00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1069
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3731
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.082 The recommended git tool is: git
00:00:00.082 using credential 00000000-0000-0000-0000-000000000002
00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.116 Fetching changes from the remote Git repository
00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.173 Using shallow fetch with depth 1
00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.173 > git --version # timeout=10
00:00:00.220 > git --version # 'git version 2.39.2'
00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.107 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.119 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.131 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.131 > git config core.sparsecheckout # timeout=10
00:00:05.144 > git read-tree -mu HEAD # timeout=10
00:00:05.158 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.184 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.184 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.299 [Pipeline] Start of Pipeline
00:00:05.309 [Pipeline] library
00:00:05.310 Loading library shm_lib@master
00:00:05.311 Library shm_lib@master is cached. Copying from home.
00:00:05.324 [Pipeline] node
00:00:05.334 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.335 [Pipeline] {
00:00:05.342 [Pipeline] catchError
00:00:05.344 [Pipeline] {
00:00:05.354 [Pipeline] wrap
00:00:05.362 [Pipeline] {
00:00:05.368 [Pipeline] stage
00:00:05.369 [Pipeline] { (Prologue)
00:00:05.561 [Pipeline] sh
00:00:05.846 + logger -p user.info -t JENKINS-CI
00:00:05.860 [Pipeline] echo
00:00:05.861 Node: WFP4
00:00:05.867 [Pipeline] sh
00:00:06.161 [Pipeline] setCustomBuildProperty
00:00:06.170 [Pipeline] echo
00:00:06.172 Cleanup processes
00:00:06.176 [Pipeline] sh
00:00:06.457 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.457 673200 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.468 [Pipeline] sh
00:00:06.754 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.754 ++ grep -v 'sudo pgrep'
00:00:06.754 ++ awk '{print $1}'
00:00:06.754 + sudo kill -9
00:00:06.754 + true
00:00:06.775 [Pipeline] cleanWs
00:00:06.791 [WS-CLEANUP] Deleting project workspace...
00:00:06.791 [WS-CLEANUP] Deferred wipeout is used...
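The "Cleanup processes" step above guards against stale processes left over from a previous run: pgrep lists anything still referencing the workspace, grep -v drops the pgrep itself, awk keeps only the PIDs, and the kill is allowed to fail when nothing matched. A minimal sketch of the same idiom, assuming the workspace path from this log (the variable name is illustrative):

    # Kill leftover processes whose command line references this workspace.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af prints "PID full-cmdline"; filter out the pgrep itself.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no survivors, kill -9 errors out; the trailing true (the log's
    # "+ true") keeps a 'set -e' pipeline from aborting on that.
    sudo kill -9 $pids || true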
00:00:06.805 [WS-CLEANUP] done
00:00:06.808 [Pipeline] setCustomBuildProperty
00:00:06.818 [Pipeline] sh
00:00:07.095 + sudo git config --global --replace-all safe.directory '*'
00:00:07.161 [Pipeline] httpRequest
00:00:08.016 [Pipeline] echo
00:00:08.017 Sorcerer 10.211.164.20 is alive
00:00:08.023 [Pipeline] retry
00:00:08.025 [Pipeline] {
00:00:08.035 [Pipeline] httpRequest
00:00:08.039 HttpMethod: GET
00:00:08.039 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.040 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.058 Response Code: HTTP/1.1 200 OK
00:00:08.058 Success: Status code 200 is in the accepted range: 200,404
00:00:08.058 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.628 [Pipeline] }
00:00:12.641 [Pipeline] // retry
00:00:12.646 [Pipeline] sh
00:00:12.926 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.941 [Pipeline] httpRequest
00:00:13.404 [Pipeline] echo
00:00:13.406 Sorcerer 10.211.164.20 is alive
00:00:13.415 [Pipeline] retry
00:00:13.418 [Pipeline] {
00:00:13.433 [Pipeline] httpRequest
00:00:13.437 HttpMethod: GET
00:00:13.437 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:13.438 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:13.458 Response Code: HTTP/1.1 200 OK
00:00:13.458 Success: Status code 200 is in the accepted range: 200,404
00:00:13.459 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:36.710 [Pipeline] }
00:01:36.724 [Pipeline] // retry
00:01:36.730 [Pipeline] sh
00:01:37.012 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:39.559 [Pipeline] sh
00:01:39.841 + git -C spdk log --oneline -n5
00:01:39.841 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:39.841 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:39.841 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:39.841 66289a6db build: use VERSION file for storing version
00:01:39.841 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:39.859 [Pipeline] withCredentials
00:01:39.869 > git --version # timeout=10
00:01:39.883 > git --version # 'git version 2.39.2'
00:01:39.900 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:39.902 [Pipeline] {
00:01:39.912 [Pipeline] retry
00:01:39.914 [Pipeline] {
00:01:39.929 [Pipeline] sh
00:01:40.212 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:40.483 [Pipeline] }
00:01:40.501 [Pipeline] // retry
00:01:40.506 [Pipeline] }
00:01:40.523 [Pipeline] // withCredentials
00:01:40.532 [Pipeline] httpRequest
00:01:41.055 [Pipeline] echo
00:01:41.057 Sorcerer 10.211.164.20 is alive
00:01:41.066 [Pipeline] retry
00:01:41.068 [Pipeline] {
00:01:41.082 [Pipeline] httpRequest
00:01:41.087 HttpMethod: GET
00:01:41.087 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:41.088 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:41.091 Response Code: HTTP/1.1 200 OK
00:01:41.092 Success: Status code 200 is in the accepted range: 200,404
00:01:41.092 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:47.521 [Pipeline] }
00:01:47.538 [Pipeline] // retry
00:01:47.546 [Pipeline] sh
00:01:47.837 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:49.231 [Pipeline] sh
00:01:49.516 + git -C dpdk log --oneline -n5
00:01:49.516 eeb0605f11 version: 23.11.0
00:01:49.516 238778122a doc: update release notes for 23.11
00:01:49.516 46aa6b3cfc doc: fix description of RSS features
00:01:49.516 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:49.516 7e421ae345 devtools: support skipping forbid rule check
00:01:49.526 [Pipeline] }
00:01:49.539 [Pipeline] // stage
00:01:49.548 [Pipeline] stage
00:01:49.551 [Pipeline] { (Prepare)
00:01:49.569 [Pipeline] writeFile
00:01:49.586 [Pipeline] sh
00:01:49.872 + logger -p user.info -t JENKINS-CI
00:01:49.885 [Pipeline] sh
00:01:50.169 + logger -p user.info -t JENKINS-CI
00:01:50.182 [Pipeline] sh
00:01:50.468 + cat autorun-spdk.conf
00:01:50.468 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.468 SPDK_TEST_NVMF=1
00:01:50.468 SPDK_TEST_NVME_CLI=1
00:01:50.468 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:50.468 SPDK_TEST_NVMF_NICS=e810
00:01:50.468 SPDK_TEST_VFIOUSER=1
00:01:50.468 SPDK_RUN_UBSAN=1
00:01:50.468 NET_TYPE=phy
00:01:50.468 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:50.468 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:50.475 RUN_NIGHTLY=1
00:01:50.480 [Pipeline] readFile
00:01:50.504 [Pipeline] withEnv
00:01:50.506 [Pipeline] {
00:01:50.519 [Pipeline] sh
00:01:50.806 + set -ex
00:01:50.806 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:50.806 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:50.806 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.806 ++ SPDK_TEST_NVMF=1
00:01:50.806 ++ SPDK_TEST_NVME_CLI=1
00:01:50.806 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:50.806 ++ SPDK_TEST_NVMF_NICS=e810
00:01:50.806 ++ SPDK_TEST_VFIOUSER=1
00:01:50.806 ++ SPDK_RUN_UBSAN=1
00:01:50.806 ++ NET_TYPE=phy
00:01:50.806 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:50.806 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:50.806 ++ RUN_NIGHTLY=1
00:01:50.806 + case $SPDK_TEST_NVMF_NICS in
00:01:50.806 + DRIVERS=ice
00:01:50.806 + [[ tcp == \r\d\m\a ]]
00:01:50.806 + [[ -n ice ]]
00:01:50.806 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:50.806 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:50.806 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:50.806 rmmod: ERROR: Module i40iw is not currently loaded
00:01:50.806 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:50.806 + true
00:01:50.806 + for D in $DRIVERS
00:01:50.806 + sudo modprobe ice
00:01:50.806 + exit 0
00:01:50.815 [Pipeline] }
00:01:50.830 [Pipeline] // withEnv
00:01:50.835 [Pipeline] }
00:01:50.849 [Pipeline] // stage
00:01:50.858 [Pipeline] catchError
00:01:50.860 [Pipeline] {
00:01:50.874 [Pipeline] timeout
00:01:50.874 Timeout set to expire in 1 hr 0 min
00:01:50.876 [Pipeline] {
00:01:50.890 [Pipeline] stage
00:01:50.892 [Pipeline] { (Tests)
00:01:50.906 [Pipeline] sh
00:01:51.193 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:51.193 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:51.193 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:51.193 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
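The withEnv block above picks the kernel driver from SPDK_TEST_NVMF_NICS (e810 maps to ice in this run), unloads RDMA modules that could claim the NIC, then loads the one it needs. A condensed sketch of that pattern, with the module lists copied from the log (the function wrapper is illustrative, not SPDK's actual code):

    # Load the kernel driver matching the NIC selected in autorun-spdk.conf.
    load_nic_drivers() {
        case "$SPDK_TEST_NVMF_NICS" in
            e810) DRIVERS=ice ;;   # Intel E810 uses the ice driver, as logged
            *)    DRIVERS= ;;
        esac
        if [[ -n $DRIVERS ]]; then
            # Unload RDMA drivers that might hold the device; "not currently
            # loaded" errors are expected, so the failure is swallowed.
            sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
            for D in $DRIVERS; do
                sudo modprobe "$D"
            done
        fi
    }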
00:01:51.193 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:51.193 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:51.193 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:51.193 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:51.193 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:51.193 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:51.193 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:51.193 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:51.193 + source /etc/os-release
00:01:51.193 ++ NAME='Fedora Linux'
00:01:51.193 ++ VERSION='39 (Cloud Edition)'
00:01:51.193 ++ ID=fedora
00:01:51.193 ++ VERSION_ID=39
00:01:51.193 ++ VERSION_CODENAME=
00:01:51.193 ++ PLATFORM_ID=platform:f39
00:01:51.193 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:51.193 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:51.193 ++ LOGO=fedora-logo-icon
00:01:51.193 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:51.193 ++ HOME_URL=https://fedoraproject.org/
00:01:51.193 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:51.193 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:51.193 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:51.193 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:51.193 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:51.193 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:51.193 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:51.193 ++ SUPPORT_END=2024-11-12
00:01:51.193 ++ VARIANT='Cloud Edition'
00:01:51.193 ++ VARIANT_ID=cloud
00:01:51.193 + uname -a
00:01:51.193 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:51.193 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:53.732 Hugepages
00:01:53.732 node hugesize free / total
00:01:53.732 node0 1048576kB 0 / 0
00:01:53.732 node0 2048kB 0 / 0
00:01:53.732 node1 1048576kB 0 / 0
00:01:53.732 node1 2048kB 0 / 0
00:01:53.733
00:01:53.733 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:53.733 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:53.733 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:53.733 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:53.733 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:53.733 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:53.733 + rm -f /tmp/spdk-ld-path
00:01:53.733 + source autorun-spdk.conf
00:01:53.733 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.733 ++ SPDK_TEST_NVMF=1
00:01:53.733 ++ SPDK_TEST_NVME_CLI=1
00:01:53.733 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:53.733 ++ SPDK_TEST_NVMF_NICS=e810
00:01:53.733 ++ SPDK_TEST_VFIOUSER=1
00:01:53.733 ++ SPDK_RUN_UBSAN=1
00:01:53.733 ++ NET_TYPE=phy
00:01:53.733 ++ SPDK_TEST_NATIVE_DPDK=v23.11
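The setup.sh status report above shows zero hugepages reserved on both NUMA nodes at this point in the run; reservation happens later in the test flow. A small sketch of reading the same per-node counters straight from sysfs, assuming the standard kernel layout:

    # Print free/total hugepages per NUMA node and page size, like the
    # summary table 'setup.sh status' prints.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}          # e.g. 2048kB or 1048576kB
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "${node##*/} $size $free / $total"
        done
    done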
00:01:53.733 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:53.733 ++ RUN_NIGHTLY=1
00:01:53.733 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:53.733 + [[ -n '' ]]
00:01:53.733 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:53.733 + for M in /var/spdk/build-*-manifest.txt
00:01:53.733 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:53.733 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:53.733 + for M in /var/spdk/build-*-manifest.txt
00:01:53.733 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:53.733 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:53.733 + for M in /var/spdk/build-*-manifest.txt
00:01:53.733 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:53.733 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:53.733 ++ uname
00:01:53.733 + [[ Linux == \L\i\n\u\x ]]
00:01:53.733 + sudo dmesg -T
00:01:53.733 + sudo dmesg --clear
00:01:53.993 + dmesg_pid=674694
00:01:53.993 + [[ Fedora Linux == FreeBSD ]]
00:01:53.993 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.993 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.993 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:53.993 + [[ -x /usr/src/fio-static/fio ]]
00:01:53.993 + sudo dmesg -Tw
00:01:53.993 + export FIO_BIN=/usr/src/fio-static/fio
00:01:53.993 + FIO_BIN=/usr/src/fio-static/fio
00:01:53.993 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:53.993 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:53.993 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:53.993 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.993 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.993 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:53.993 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.993 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.993 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
16:07:42 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
16:07:42 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
16:07:42 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1
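The manifest loop above copies whichever /var/spdk/build-*-manifest.txt files exist on the node into the job's output directory. The same guarded-copy loop in isolation, with paths as in the log:

    # Copy any pre-built manifests the node has into the per-job output dir.
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
    for M in /var/spdk/build-*-manifest.txt; do
        # If the glob matched nothing, $M is the literal pattern; -f skips it.
        [[ -f $M ]] && cp "$M" "$out/"
    done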
16:07:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
16:07:42 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
16:07:42 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
16:07:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:07:42 -- scripts/common.sh@15 -- $ shopt -s extglob
16:07:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
16:07:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:07:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:07:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:07:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:07:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:07:42 -- paths/export.sh@5 -- $ export PATH
16:07:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:07:42 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
16:07:42 -- common/autobuild_common.sh@493 -- $ date +%s
16:07:42 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734361662.XXXXXX
16:07:42 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734361662.yaorlf
16:07:42 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
16:07:42 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
16:07:42 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
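autorun.sh's trap above is what guarantees the timing summary runs on every exit path, success or failure. A stripped-down sketch of the idiom (the timing_finish body here is a stand-in, not SPDK's implementation):

    #!/usr/bin/env bash
    set -e
    timing_finish() {
        # Stand-in body: SPDK's real timing_finish renders collected timings.
        echo "run finished at $(date -u)"
    }
    # The EXIT trap fires on normal exit, on 'set -e' aborts, and on most
    # signals, so the summary is emitted no matter how the build ends.
    trap 'timing_finish || exit 1' EXIT
    # ... build steps would follow here ...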
16:07:42 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
16:07:42 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
16:07:42 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
16:07:42 -- common/autobuild_common.sh@509 -- $ get_config_params
16:07:42 -- common/autotest_common.sh@409 -- $ xtrace_disable
16:07:42 -- common/autotest_common.sh@10 -- $ set +x
16:07:42 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
16:07:42 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
16:07:42 -- pm/common@17 -- $ local monitor
16:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:07:42 -- pm/common@21 -- $ date +%s
16:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:07:42 -- pm/common@21 -- $ date +%s
16:07:42 -- pm/common@25 -- $ sleep 1
16:07:42 -- pm/common@21 -- $ date +%s
16:07:42 -- pm/common@21 -- $ date +%s
16:07:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734361662
16:07:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734361662
16:07:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734361662
16:07:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734361662
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734361662_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734361662_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734361662_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734361662_collect-bmc-pm.bmc.pm.log
16:07:43 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
16:07:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
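start_monitor_resources above fans out one background collector per resource (CPU load, vmstat, CPU temperature, BMC power), all sharing a single monitor.autobuild.sh.<epoch> prefix so the resulting logs correlate to one run. A reduced sketch of that fan-out, with SPDK_ROOT standing in for the checkout path and the -d/-l/-p flags taken from the log:

    # Launch one background collector per resource, tagged with one epoch.
    power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    prefix="monitor.autobuild.sh.$(date +%s)"
    pm="$SPDK_ROOT/scripts/perf/pm"      # SPDK_ROOT: assumed checkout path
    for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$pm/$collector" -d "$power_dir" -l -p "$prefix" &
    done
    # BMC power readings need root; -E preserves the environment, as logged.
    sudo -E "$pm/collect-bmc-pm" -d "$power_dir" -l -p "$prefix" &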
16:07:43 -- spdk/autobuild.sh@12 -- $ umask 022
16:07:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
16:07:43 -- spdk/autobuild.sh@16 -- $ date -u
Mon Dec 16 03:07:43 PM UTC 2024
16:07:43 -- spdk/autobuild.sh@17 -- $ git describe --tags
v25.01-rc1-2-ge01cb43b8
16:07:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
16:07:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:07:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:07:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:07:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:07:43 -- common/autotest_common.sh@10 -- $ set +x
************************************
START TEST ubsan
************************************
16:07:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
using ubsan

real 0m0.000s
user 0m0.000s
sys 0m0.000s
16:07:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
16:07:43 ubsan -- common/autotest_common.sh@10 -- $ set +x
************************************
END TEST ubsan
************************************
16:07:43 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
16:07:43 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
16:07:43 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
16:07:43 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
16:07:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:07:43 -- common/autotest_common.sh@10 -- $ set +x
************************************
START TEST build_native_dpdk
************************************
16:07:43 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
16:07:43 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
16:07:43 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
16:07:43 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
16:07:43 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
16:07:43 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
16:07:43 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
16:07:43 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
16:07:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
16:07:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
16:07:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
16:07:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
16:07:43 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
16:07:43 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
16:07:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
16:07:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
16:07:43 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
16:07:43 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
eeb0605f11 version: 23.11.0
238778122a doc: update release notes for 23.11
46aa6b3cfc doc: fix description of RSS features
dd88f51a57 devtools: forbid DPDK API in cnxk base driver
7e421ae345 devtools: support skipping forbid rule check
16:07:43 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
16:07:43 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
16:07:43 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
16:07:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
16:07:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
16:07:43 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
16:07:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
16:07:43 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
16:07:43 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
16:07:43 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
16:07:43 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
16:07:43 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
16:07:43 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
16:07:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
16:07:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
16:07:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
16:07:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
16:07:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
16:07:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
16:07:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
16:07:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
16:07:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
16:07:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
16:07:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
16:07:43 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
16:07:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
16:07:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:07:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
16:07:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
16:07:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
16:07:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
16:07:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
16:07:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
16:07:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
16:07:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
16:07:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
16:07:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
16:07:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
16:07:43 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
16:07:43 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
patching file config/rte_config.h
Hunk #1 succeeded at 60 (offset 1 line).
16:07:43 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
16:07:43 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
16:07:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
16:07:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
16:07:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
16:07:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
16:07:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
16:07:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
16:07:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
16:07:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
16:07:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
16:07:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
16:07:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
16:07:43 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
16:07:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
16:07:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:07:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
16:07:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
16:07:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
16:07:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
16:07:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
16:07:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
16:07:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
16:07:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
16:07:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
16:07:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
16:07:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
16:07:43 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
16:07:43 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
16:07:43 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
patching file lib/pcapng/rte_pcapng.c
16:07:43 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
16:07:43 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
16:07:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
16:07:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
16:07:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
16:07:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
16:07:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
16:07:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
16:07:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
16:07:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
16:07:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
16:07:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
16:07:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
16:07:43 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
16:07:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
16:07:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:07:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
16:07:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
16:07:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
16:07:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
16:07:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
16:07:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
16:07:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
16:07:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
16:07:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
16:07:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
16:07:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
16:07:43 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
16:07:43 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
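The xtrace runs above are SPDK's lt/ge helpers stepping through cmp_versions: both version strings are split on '.', '-' and ':' into arrays, then compared numerically field by field. A condensed sketch of that algorithm (simplified; the real scripts/common.sh tracks lt/gt/eq counters and supports more operators):

    # Compare dotted versions field by field; prints lt, gt or eq.
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # e.g. 23.11.0 -> (23 11 0)
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=${#ver1[@]}
        (( ${#ver2[@]} > len )) && len=${#ver2[@]}
        for (( v = 0; v < len; v++ )); do
            # Missing fields default to 0, so 23.11 compares like 23.11.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; return; }
        done
        echo eq
    }
    # cmp_versions 23.11.0 21.11.0 -> gt, so "lt 23.11.0 21.11.0" fails above.
    # cmp_versions 23.11.0 24.07.0 -> lt, so the 24.07 backport patch applies.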
16:07:43 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
16:07:43 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
16:07:43 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
16:07:43 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
16:07:43 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:01:59.570 The Meson build system
00:01:59.570 Version: 1.5.0
00:01:59.570 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:59.571 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:59.571 Build type: native build
00:01:59.571 Program cat found: YES (/usr/bin/cat)
00:01:59.571 Project name: DPDK
00:01:59.571 Project version: 23.11.0
00:01:59.571 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:59.571 C linker for the host machine: gcc ld.bfd 2.40-14
00:01:59.571 Host machine cpu family: x86_64
00:01:59.571 Host machine cpu: x86_64
00:01:59.571 Message: ## Building in Developer Mode ##
00:01:59.571 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:59.571 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:59.571 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:59.571 Program python3 found: YES (/usr/bin/python3)
00:01:59.571 Program cat found: YES (/usr/bin/cat)
00:01:59.571 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:59.571 Compiler for C supports arguments -march=native: YES
00:01:59.571 Checking for size of "void *" : 8
00:01:59.571 Checking for size of "void *" : 8 (cached)
00:01:59.571 Library m found: YES
00:01:59.571 Library numa found: YES
00:01:59.571 Has header "numaif.h" : YES
00:01:59.571 Library fdt found: NO
00:01:59.571 Library execinfo found: NO
00:01:59.571 Has header "execinfo.h" : YES
00:01:59.571 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:59.571 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:59.571 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:59.571 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:59.571 Run-time dependency openssl found: YES 3.1.1
00:01:59.571 Run-time dependency libpcap found: YES 1.10.4
00:01:59.571 Has header "pcap.h" with dependency libpcap: YES
00:01:59.571 Compiler for C supports arguments -Wcast-qual: YES
00:01:59.571 Compiler for C supports arguments -Wdeprecated: YES
00:01:59.571 Compiler for C supports arguments -Wformat: YES
00:01:59.571 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:59.571 Compiler for C supports arguments -Wformat-security: NO
00:01:59.571 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:59.571 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:59.571 Compiler for C supports arguments -Wnested-externs: YES
00:01:59.571 Compiler for C supports arguments -Wold-style-definition: YES
00:01:59.571 Compiler for C supports arguments -Wpointer-arith: YES
00:01:59.571 Compiler for C supports arguments -Wsign-compare: YES
00:01:59.571 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:59.571 Compiler for C supports arguments -Wundef: YES
00:01:59.571 Compiler for C supports arguments -Wwrite-strings: YES
00:01:59.571 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:59.571 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:59.571 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:59.571 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:59.571 Program objdump found: YES (/usr/bin/objdump)
00:01:59.571 Compiler for C supports arguments -mavx512f: YES
00:01:59.571 Checking if "AVX512 checking" compiles: YES
00:01:59.571 Fetching value of define "__SSE4_2__" : 1
00:01:59.571 Fetching value of define "__AES__" : 1
00:01:59.571 Fetching value of define "__AVX__" : 1
00:01:59.571 Fetching value of define "__AVX2__" : 1
00:01:59.571 Fetching value of define "__AVX512BW__" : 1
00:01:59.571 Fetching value of define "__AVX512CD__" : 1
00:01:59.571 Fetching value of define "__AVX512DQ__" : 1
00:01:59.571 Fetching value of define "__AVX512F__" : 1
00:01:59.571 Fetching value of define "__AVX512VL__" : 1
00:01:59.571 Fetching value of define "__PCLMUL__" : 1
00:01:59.571 Fetching value of define "__RDRND__" : 1
00:01:59.571 Fetching value of define "__RDSEED__" : 1
00:01:59.571 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:59.571 Fetching value of define "__znver1__" : (undefined)
00:01:59.571 Fetching value of define "__znver2__" : (undefined)
00:01:59.571 Fetching value of define "__znver3__" : (undefined)
00:01:59.571 Fetching value of define "__znver4__" : (undefined)
00:01:59.571 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:59.571 Message: lib/log: Defining dependency "log"
00:01:59.571 Message: lib/kvargs: Defining dependency "kvargs"
00:01:59.571 Message: lib/telemetry: Defining dependency "telemetry"
00:01:59.571 Checking for function "getentropy" : NO
00:01:59.571 Message: lib/eal: Defining dependency "eal"
00:01:59.571 Message: lib/ring: Defining dependency "ring"
00:01:59.571 Message: lib/rcu: Defining dependency "rcu"
00:01:59.571 Message: lib/mempool: Defining dependency "mempool"
00:01:59.571 Message: lib/mbuf: Defining dependency "mbuf"
00:01:59.571 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:59.571 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:59.571 Compiler for C supports arguments -mpclmul: YES
00:01:59.571 Compiler for C supports arguments -maes: YES
00:01:59.571 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:59.571 Compiler for C supports arguments -mavx512bw: YES
00:01:59.571 Compiler for C supports arguments -mavx512dq: YES
00:01:59.571 Compiler for C supports arguments -mavx512vl: YES
00:01:59.571 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:59.571 Compiler for C supports arguments -mavx2: YES
00:01:59.571 Compiler for C supports arguments -mavx: YES
00:01:59.571 Message: lib/net: Defining dependency "net"
00:01:59.571 Message: lib/meter: Defining dependency "meter"
00:01:59.571 Message: lib/ethdev: Defining dependency "ethdev"
00:01:59.571 Message: lib/pci: Defining dependency "pci"
00:01:59.571 Message: lib/cmdline: Defining dependency "cmdline"
00:01:59.571 Message: lib/metrics: Defining dependency "metrics"
00:01:59.571 Message: lib/hash: Defining dependency "hash"
00:01:59.571 Message: lib/timer: Defining dependency "timer"
00:01:59.571 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512CD__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.571 Message: lib/acl: Defining dependency "acl"
00:01:59.571 Message: lib/bbdev: Defining dependency "bbdev"
00:01:59.571 Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:59.571 Run-time dependency libelf found: YES 0.191
00:01:59.571 Message: lib/bpf: Defining dependency "bpf"
00:01:59.571 Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:59.571 Message: lib/compressdev: Defining dependency "compressdev"
00:01:59.571 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:59.571 Message: lib/distributor: Defining dependency "distributor"
00:01:59.571 Message: lib/dmadev: Defining dependency "dmadev"
00:01:59.571 Message: lib/efd: Defining dependency "efd"
00:01:59.571 Message: lib/eventdev: Defining dependency "eventdev"
00:01:59.571 Message: lib/dispatcher: Defining dependency "dispatcher"
00:01:59.571 Message: lib/gpudev: Defining dependency "gpudev"
00:01:59.571 Message: lib/gro: Defining dependency "gro"
00:01:59.571 Message: lib/gso: Defining dependency "gso"
00:01:59.571 Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:59.571 Message: lib/jobstats: Defining dependency "jobstats"
00:01:59.571 Message: lib/latencystats: Defining dependency "latencystats"
00:01:59.571 Message: lib/lpm: Defining dependency "lpm"
00:01:59.571 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512IFMA__" : (undefined)
00:01:59.571 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:01:59.571 Message: lib/member: Defining dependency "member"
00:01:59.571 Message: lib/pcapng: Defining dependency "pcapng"
00:01:59.571 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:59.571 Message: lib/power: Defining dependency "power"
00:01:59.571 Message: lib/rawdev: Defining dependency "rawdev"
00:01:59.571 Message: lib/regexdev: Defining dependency "regexdev"
00:01:59.571 Message: lib/mldev: Defining dependency "mldev"
00:01:59.571 Message: lib/rib: Defining dependency "rib"
00:01:59.571 Message: lib/reorder: Defining dependency "reorder"
00:01:59.571 Message: lib/sched: Defining dependency "sched"
00:01:59.571 Message: lib/security: Defining dependency "security"
00:01:59.571 Message: lib/stack: Defining dependency "stack"
00:01:59.571 Has header "linux/userfaultfd.h" : YES
00:01:59.571 Has header "linux/vduse.h" : YES
00:01:59.571 Message: lib/vhost: Defining dependency "vhost"
00:01:59.571 Message: lib/ipsec: Defining dependency "ipsec"
00:01:59.571 Message: lib/pdcp: Defining dependency "pdcp"
00:01:59.571 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.571 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.571 Message: lib/fib: Defining dependency "fib"
00:01:59.571 Message: lib/port: Defining dependency "port"
00:01:59.571 Message: lib/pdump: Defining dependency "pdump"
00:01:59.571 Message: lib/table: Defining dependency "table"
00:01:59.571 Message: lib/pipeline: Defining dependency "pipeline"
00:01:59.571 Message: lib/graph: Defining dependency "graph"
00:01:59.571 Message: lib/node: Defining dependency "node"
00:01:59.571 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:01.489 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:01.489 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:01.489 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:01.489 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:01.489 Compiler for C supports arguments -Wno-unused-value: YES
00:02:01.489 Compiler for C supports arguments -Wno-format: YES
00:02:01.489 Compiler for C supports arguments -Wno-format-security: YES
00:02:01.489 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:01.489 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:01.489 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:01.489 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:01.489 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:01.489 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:01.489 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:01.489 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:01.489 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:01.489 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:01.489 Has header "sys/epoll.h" : YES
00:02:01.489 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:01.489 Configuring doxy-api-html.conf using configuration
00:02:01.489 Configuring doxy-api-man.conf using configuration
00:02:01.489 Program mandb found: YES (/usr/bin/mandb)
00:02:01.489 Program sphinx-build found: NO
00:02:01.489 Configuring rte_build_config.h using configuration
00:02:01.489 Message:
00:02:01.489 =================
00:02:01.489 Applications Enabled
00:02:01.489 =================
00:02:01.489
00:02:01.489 apps:
00:02:01.489 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:01.489 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:01.489 test-pmd, test-regex, test-sad, test-security-perf,
00:02:01.489
00:02:01.489 Message:
00:02:01.489 =================
00:02:01.489 Libraries Enabled
00:02:01.489 =================
00:02:01.489
00:02:01.489 libs:
00:02:01.489 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:01.489 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:01.489 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:01.489 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:01.489 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:01.489 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:01.489 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:01.489
00:02:01.489
00:02:01.489 Message:
00:02:01.489 ===============
00:02:01.489 Drivers Enabled
00:02:01.489 ===============
00:02:01.489
00:02:01.489 common:
00:02:01.489
00:02:01.489 bus:
00:02:01.489 pci, vdev,
00:02:01.489 mempool:
00:02:01.489 ring,
00:02:01.489 dma:
00:02:01.489
00:02:01.489 net:
00:02:01.489 i40e,
00:02:01.489 raw:
00:02:01.489
00:02:01.489 crypto:
00:02:01.489
00:02:01.489 compress:
00:02:01.489
00:02:01.489 regex:
00:02:01.489
00:02:01.489 ml:
00:02:01.489
00:02:01.489 vdpa:
00:02:01.489
00:02:01.489 event:
00:02:01.489
00:02:01.489 baseband:
00:02:01.489
00:02:01.489 gpu:
00:02:01.489
00:02:01.489
00:02:01.489 Message:
00:02:01.489 =================
00:02:01.489 Content Skipped
00:02:01.489 =================
00:02:01.489
00:02:01.489 apps:
00:02:01.489
00:02:01.489 libs:
00:02:01.489
00:02:01.489 drivers:
00:02:01.489 common/cpt: not in enabled drivers build config
00:02:01.489 common/dpaax: not in enabled drivers build config
00:02:01.489 common/iavf: not in enabled drivers build config
00:02:01.489 common/idpf: not in enabled drivers build config
00:02:01.489 common/mvep: not in enabled drivers build config
00:02:01.489 common/octeontx: not in enabled drivers build config
00:02:01.489 bus/auxiliary: not in enabled drivers build config
00:02:01.489 bus/cdx: not in enabled drivers build config
00:02:01.489 bus/dpaa: not in enabled drivers build config
00:02:01.489 bus/fslmc: not in enabled drivers build config
00:02:01.489 bus/ifpga: not in enabled drivers build config
00:02:01.489 bus/platform: not in enabled drivers build config
00:02:01.489 bus/vmbus: not in enabled drivers build config
00:02:01.489 common/cnxk: not in enabled drivers build config
00:02:01.489 common/mlx5: not in enabled drivers build config
00:02:01.489 common/nfp: not in enabled drivers build config
00:02:01.489 common/qat: not in enabled drivers build config
00:02:01.489 common/sfc_efx: not in enabled drivers build config
00:02:01.489 mempool/bucket: not in enabled drivers build config
00:02:01.489 mempool/cnxk: not in enabled drivers build config
00:02:01.489 mempool/dpaa: not in enabled drivers build config
00:02:01.489 mempool/dpaa2: not in enabled drivers build config
00:02:01.489 mempool/octeontx: not in enabled drivers build config
00:02:01.489 mempool/stack: not in enabled drivers build config
00:02:01.489 dma/cnxk: not in enabled drivers build config
00:02:01.489 dma/dpaa: not in enabled drivers build config
00:02:01.489 dma/dpaa2: not in enabled drivers build config
00:02:01.489 dma/hisilicon: not in enabled drivers build config
00:02:01.489 dma/idxd: not in enabled drivers build config
00:02:01.489 dma/ioat: not in enabled drivers build config
00:02:01.489 dma/skeleton: not in enabled drivers build config
00:02:01.489 net/af_packet: not in enabled drivers build config
00:02:01.489 net/af_xdp: not in enabled drivers build config
00:02:01.489 net/ark: not in enabled drivers build config
00:02:01.489 net/atlantic: not in enabled drivers build config
00:02:01.489 net/avp: not in enabled drivers build config
00:02:01.489 net/axgbe: not in enabled drivers build config
00:02:01.489 net/bnx2x: not in enabled drivers build config
00:02:01.489 net/bnxt: not in enabled drivers build config
00:02:01.489 net/bonding: not in enabled drivers build config
00:02:01.489 net/cnxk: not in enabled drivers build config
00:02:01.489 net/cpfl: not in enabled drivers build config
00:02:01.489 net/cxgbe: not in enabled drivers build config
00:02:01.489 net/dpaa: not in enabled drivers build config
00:02:01.489 net/dpaa2: not in enabled drivers build config
00:02:01.489 net/e1000: not in enabled drivers build config
00:02:01.489 net/ena: not in enabled drivers build config
00:02:01.489 net/enetc: not in enabled drivers build config
00:02:01.489 net/enetfec: not in enabled drivers build config
00:02:01.489 net/enic: not in enabled drivers build config
00:02:01.489 net/failsafe: not in enabled drivers build config
00:02:01.489 net/fm10k: not in enabled drivers build config
00:02:01.489 net/gve: not in enabled drivers build config
00:02:01.489 net/hinic: not in enabled drivers build config
00:02:01.489 net/hns3: not in enabled drivers build config
00:02:01.489 net/iavf: not in enabled drivers build config
00:02:01.489 net/ice: not in enabled drivers build config
00:02:01.489 net/idpf: not in enabled drivers build config
00:02:01.489 net/igc: not in enabled drivers build config
00:02:01.489 net/ionic: not in enabled drivers build config
00:02:01.489 net/ipn3ke: not in enabled drivers build config
00:02:01.489 net/ixgbe: not in enabled drivers build config
00:02:01.489 net/mana: not in enabled drivers build config
00:02:01.490 net/memif: not in enabled drivers build config
00:02:01.490 net/mlx4: not in enabled drivers build config
00:02:01.490 net/mlx5: not in enabled drivers build config
00:02:01.490 net/mvneta: not in enabled drivers build config
00:02:01.490 net/mvpp2: not in enabled drivers build config
00:02:01.490 net/netvsc: not in enabled drivers build config
00:02:01.490 net/nfb: not in enabled drivers build config
00:02:01.490 net/nfp: not in enabled drivers build config
00:02:01.490 net/ngbe: not in enabled drivers build config
00:02:01.490 net/null: not in enabled drivers build config
00:02:01.490 net/octeontx: not in enabled drivers build config
00:02:01.490 net/octeon_ep: not in enabled drivers build config
00:02:01.490 net/pcap: not in enabled drivers build config
00:02:01.490 net/pfe: not in enabled drivers build config
00:02:01.490 net/qede: not in enabled drivers build config
00:02:01.490 net/ring: not in enabled drivers build config
00:02:01.490 net/sfc: not in enabled drivers build config
00:02:01.490 net/softnic: not in enabled drivers build config
00:02:01.490 net/tap: not in enabled drivers build config
00:02:01.490 net/thunderx: not in enabled drivers build config
00:02:01.490 net/txgbe: not in enabled drivers build config
00:02:01.490 net/vdev_netvsc: not in enabled drivers build config
00:02:01.490 net/vhost: not in enabled drivers build config
00:02:01.490 net/virtio: not in enabled drivers build config
00:02:01.490 net/vmxnet3: not in enabled drivers build config
00:02:01.490 raw/cnxk_bphy: not in enabled drivers build config
00:02:01.490 raw/cnxk_gpio: not in enabled drivers build config
00:02:01.490 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:01.490 raw/ifpga: not in enabled drivers build config
00:02:01.490 raw/ntb: not in enabled drivers build config
00:02:01.490 raw/skeleton: not in enabled drivers build config
00:02:01.490 crypto/armv8: not in enabled drivers build config
00:02:01.490 crypto/bcmfs: not in enabled drivers build config
00:02:01.490 crypto/caam_jr: not in enabled drivers build config
00:02:01.490 crypto/ccp: not in enabled drivers build config
00:02:01.490 crypto/cnxk: not in enabled drivers build config
00:02:01.490 crypto/dpaa_sec: not in enabled drivers build config
00:02:01.490 crypto/dpaa2_sec: not in enabled drivers build config
00:02:01.490 crypto/ipsec_mb: not in enabled drivers build config
00:02:01.490 crypto/mlx5: not in enabled drivers build config
00:02:01.490 crypto/mvsam: not in enabled drivers build config
00:02:01.490 crypto/nitrox: not in enabled drivers build config
00:02:01.490 crypto/null: not in enabled drivers build config
00:02:01.490 crypto/octeontx: not in enabled drivers build config
00:02:01.490 crypto/openssl: not in enabled drivers build config
00:02:01.490 crypto/scheduler: not in enabled drivers build config
00:02:01.490 crypto/uadk: not in enabled drivers build config
00:02:01.490 crypto/virtio: not in enabled drivers build config
00:02:01.490 compress/isal: not in enabled drivers build config
00:02:01.490 compress/mlx5: not in enabled drivers build config
00:02:01.490 compress/octeontx: not in enabled drivers build config
00:02:01.490 compress/zlib: not in enabled drivers build config
00:02:01.490 regex/mlx5: not in enabled drivers build config
00:02:01.490 regex/cn9k: not in enabled drivers build config
00:02:01.490 ml/cnxk: not in enabled drivers build config
00:02:01.490 vdpa/ifc: not in enabled drivers build config
00:02:01.490 vdpa/mlx5: not in enabled drivers build config
00:02:01.490 vdpa/nfp: not in enabled drivers build config
00:02:01.490 vdpa/sfc: not in enabled drivers build config
00:02:01.490 event/cnxk: not in enabled drivers build config
00:02:01.490 event/dlb2: not in enabled drivers build config
00:02:01.490 event/dpaa: not in enabled drivers build config
00:02:01.490 event/dpaa2: not in enabled drivers build config
00:02:01.490 event/dsw: not in enabled drivers build config
00:02:01.490 event/opdl: not in enabled drivers build config
00:02:01.490 event/skeleton: not in enabled drivers build config
00:02:01.490 event/sw: not in enabled drivers build config
00:02:01.490 event/octeontx: not in enabled drivers build config
00:02:01.490 baseband/acc: not in enabled drivers build config
00:02:01.490 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:01.490 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:01.490 baseband/la12xx: not in enabled drivers build config
00:02:01.490 baseband/null: not in enabled drivers build config
00:02:01.490 baseband/turbo_sw: not in enabled drivers build config
00:02:01.490 gpu/cuda: not in enabled drivers build config
00:02:01.490
00:02:01.490
00:02:01.490 Build targets in project: 217
00:02:01.490
00:02:01.490 DPDK 23.11.0
00:02:01.490
00:02:01.490 User defined options
00:02:01.490 libdir : lib
00:02:01.490 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.490 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:01.490 c_link_args :
00:02:01.490 enable_docs : false
00:02:01.490 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:01.490 enable_kmods : false
00:02:01.490 machine : native
00:02:01.490 tests : false
00:02:01.490 
00:02:01.490 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:01.490 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:01.490 16:07:49 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96
00:02:01.490 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:01.490 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:01.490 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:01.490 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:01.490 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:01.490 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:01.490 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:01.490 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:01.490 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:01.490 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:01.490 [10/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:01.490 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:01.490 [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:01.490 [13/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:01.490 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:01.490 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:01.490 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:01.490 [17/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:01.490 [18/707] Linking static target lib/librte_kvargs.a
00:02:01.490 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:01.490 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:01.748 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:01.748 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:01.748 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:01.748 [24/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:01.748 [25/707] Linking static target lib/librte_log.a
00:02:01.748 [26/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:01.748 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:01.748 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:01.748 [29/707] Linking static target lib/librte_pci.a
00:02:01.748 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:01.748 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:01.748 [32/707]
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.749 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.749 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.749 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:02.012 [36/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:02.012 [37/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:02.012 [38/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.012 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.012 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.012 [41/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.012 [42/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.012 [43/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.012 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.012 [45/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.012 [46/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:02.012 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.012 [48/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:02.012 [49/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:02.012 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.277 [51/707] Linking static target lib/librte_meter.a 00:02:02.277 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.277 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.277 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.277 [55/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:02.277 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.277 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.277 [58/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.277 [59/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.277 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.277 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.277 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.277 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.277 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.277 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.277 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.277 [67/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:02.277 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.277 [69/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:02.277 [70/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.277 [71/707] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:02.277 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.277 [73/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.277 [74/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.277 [75/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:02.277 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.277 [77/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.277 [78/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:02.277 [79/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.277 [80/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:02.277 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.277 [82/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:02.277 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.277 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.277 [85/707] Linking static target lib/librte_cmdline.a 00:02:02.277 [86/707] Linking static target lib/librte_ring.a 00:02:02.277 [87/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:02.277 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:02.277 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.277 [90/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.277 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.277 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.539 [93/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.539 [94/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:02.539 [95/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.539 [96/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.539 [97/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.539 [98/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:02.539 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.539 [100/707] Linking static target lib/librte_metrics.a 00:02:02.539 [101/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:02.539 [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.539 [103/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.539 [104/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:02.539 [105/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:02.539 [106/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:02.539 [107/707] Linking static target lib/librte_net.a 00:02:02.539 [108/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.539 [109/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.539 [110/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.539 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.539 
[112/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.539 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.539 [114/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:02.539 [115/707] Linking target lib/librte_log.so.24.0 00:02:02.805 [116/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:02.805 [117/707] Linking static target lib/librte_cfgfile.a 00:02:02.805 [118/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.805 [119/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:02.805 [120/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.805 [121/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:02.805 [122/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.805 [123/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:02.805 [124/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.805 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:02.805 [126/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:02.805 [127/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.805 [128/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:02.805 [129/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.805 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.805 [131/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.805 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.805 [133/707] Linking static target lib/librte_mempool.a 00:02:02.805 [134/707] Linking static target lib/librte_bitratestats.a 00:02:02.805 [135/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:02.805 [136/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.805 [137/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.805 [138/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:02.805 [139/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:02.805 [140/707] Linking target lib/librte_kvargs.so.24.0 00:02:03.068 [141/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.068 [142/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.068 [143/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.068 [144/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.068 [145/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:03.068 [146/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.068 [147/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:03.068 [148/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.068 [149/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:03.068 [150/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.068 [151/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.068 [152/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 
00:02:03.068 [153/707] Linking static target lib/librte_timer.a 00:02:03.068 [154/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:03.068 [155/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:03.068 [156/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.068 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:03.068 [158/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.068 [159/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:03.068 [160/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:03.333 [161/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.333 [162/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:03.333 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:03.333 [164/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:03.333 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.333 [166/707] Linking static target lib/librte_compressdev.a 00:02:03.333 [167/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.333 [168/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.333 [169/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:03.333 [170/707] Linking static target lib/librte_rcu.a 00:02:03.333 [171/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:03.333 [172/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:03.333 [173/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:03.333 [174/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.333 [175/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.333 [176/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:03.333 [177/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:03.333 [178/707] Linking static target lib/librte_jobstats.a 00:02:03.333 [179/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.333 [180/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:03.333 [181/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:03.333 [182/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:03.333 [183/707] Linking static target lib/librte_dispatcher.a 00:02:03.333 [184/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:03.333 [185/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:03.333 [186/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.595 [187/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:03.595 [188/707] Linking static target lib/librte_gpudev.a 00:02:03.595 [189/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:03.595 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:03.595 [191/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:03.595 [192/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.595 [193/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:03.595 [194/707] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.595 [195/707] Linking static target lib/librte_latencystats.a 00:02:03.595 [196/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:03.595 [197/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.595 [198/707] Linking static target lib/librte_bbdev.a 00:02:03.595 [199/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:03.595 [200/707] Linking static target lib/librte_dmadev.a 00:02:03.595 [201/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.595 [202/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:03.595 [203/707] Linking static target lib/librte_mbuf.a 00:02:03.595 [204/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.595 [205/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:03.595 [206/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:03.595 [207/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:03.595 [208/707] Linking static target lib/librte_gro.a 00:02:03.595 [209/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:03.595 [210/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:03.595 [211/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:03.595 [212/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:03.595 [213/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.595 [214/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.595 [215/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.595 [216/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:03.595 [217/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.595 [218/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:03.595 [219/707] Linking static target lib/librte_gso.a 00:02:03.862 [220/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:03.862 [221/707] Linking static target lib/librte_eal.a 00:02:03.862 [222/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:03.862 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:03.862 [224/707] Linking static target lib/librte_distributor.a 00:02:03.862 [225/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:03.862 [226/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:03.862 [227/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.862 [228/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.862 [229/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:03.862 [230/707] Linking static target lib/librte_ip_frag.a 00:02:03.862 [231/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:03.862 [232/707] Linking static target lib/librte_telemetry.a 00:02:03.862 [233/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:03.862 [234/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:03.862 [235/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.862 
[236/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:03.862 [237/707] Linking static target lib/librte_stack.a 00:02:03.862 [238/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:03.862 [239/707] Linking static target lib/librte_regexdev.a 00:02:03.862 [240/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.862 [241/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.862 [242/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.862 [243/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.862 [244/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:03.862 [245/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.862 [246/707] Linking static target lib/librte_rawdev.a 00:02:03.862 [247/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.862 [248/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:03.862 [249/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:04.130 [250/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:04.130 [251/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:04.130 [252/707] Linking static target lib/librte_mldev.a 00:02:04.130 [253/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:04.130 [254/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:04.130 [255/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.130 [256/707] Linking static target lib/librte_bpf.a 00:02:04.130 [257/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:04.130 [258/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.130 [259/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.130 [260/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.130 [261/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:04.130 [262/707] Linking static target lib/librte_power.a 00:02:04.130 [263/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:04.130 [264/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.130 [265/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:04.130 [266/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:04.130 [267/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.130 [268/707] Linking static target lib/librte_pcapng.a 00:02:04.130 [269/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:04.130 [270/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.130 [271/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.130 [272/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:04.130 [273/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.130 [274/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.130 [275/707] Linking static target lib/librte_reorder.a 00:02:04.130 [276/707] Generating lib/dmadev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:04.391 [277/707] Linking static target lib/librte_security.a 00:02:04.391 [278/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.391 [279/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.391 [280/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:04.391 [281/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:04.391 [282/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:04.391 [283/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:04.391 [284/707] Linking static target lib/librte_lpm.a 00:02:04.391 [285/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:04.391 [286/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:04.391 [287/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.391 [288/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:04.391 [289/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:04.391 [290/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:04.391 [291/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:04.662 [292/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [293/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:04.662 [294/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:04.662 [295/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [296/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:04.662 [297/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [298/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [299/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.662 [300/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:04.662 [301/707] Linking static target lib/librte_rib.a 00:02:04.662 [302/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:04.662 [303/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.662 [304/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:04.662 [305/707] Linking target lib/librte_telemetry.so.24.0 00:02:04.662 [306/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.662 [307/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:04.662 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:04.662 [309/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:04.662 [310/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:04.929 [311/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:04.929 [312/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:04.929 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:04.929 [314/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.929 [315/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:04.929 
[316/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:04.929 [317/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.929 [318/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:04.929 [319/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:04.929 [320/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:04.929 [321/707] Linking static target lib/librte_efd.a 00:02:04.929 [322/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:04.929 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:04.929 [324/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:04.929 [325/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:04.929 [326/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.929 [327/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:04.929 [328/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:04.929 [329/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.929 [330/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:04.929 [331/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:04.929 [332/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:04.929 [333/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:04.929 [334/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:05.192 [335/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:05.192 [336/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.192 [337/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:05.192 [338/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:05.192 [339/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:05.192 [340/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:05.192 [341/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:05.192 [342/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.192 [343/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.192 [344/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:05.192 [345/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:05.192 [346/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:05.192 [347/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:05.192 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:05.192 [349/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:05.192 [350/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:05.192 [351/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:05.192 [352/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:05.461 [353/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.461 [354/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 
00:02:05.461 [355/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:05.461 [356/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:05.461 [357/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:05.462 [358/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:05.462 [359/707] Linking static target lib/librte_fib.a 00:02:05.462 [360/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:05.462 [361/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:05.462 [362/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.462 [363/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:05.462 [364/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:05.462 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:05.462 [366/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:05.462 [367/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.462 [368/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:05.462 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.462 [370/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:05.722 [371/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:05.722 [372/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.722 [373/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.722 [374/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:05.722 [375/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.722 [376/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:05.722 [377/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:05.722 [378/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.722 [379/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:05.722 [380/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:05.722 [381/707] Linking static target lib/librte_graph.a 00:02:05.989 [382/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.989 [383/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:05.989 [384/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:05.989 [385/707] Linking static target lib/librte_pdump.a 00:02:05.989 [386/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.989 [387/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.989 [388/707] Linking static target lib/librte_cryptodev.a 00:02:05.989 [389/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:05.989 [390/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:05.989 [391/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:05.989 [392/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:05.989 [393/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:05.989 [394/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:05.989 [395/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:05.989 [396/707] Compiling C object 
app/dpdk-graph.p/graph_utils.c.o 00:02:05.989 [397/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:05.989 [398/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:05.989 [399/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:05.989 [400/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:05.989 [401/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:05.989 [402/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.989 [403/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:05.989 [404/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.989 [405/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:05.989 [406/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:05.989 [407/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.250 [408/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:06.250 [409/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:06.250 [410/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:06.250 [411/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:06.250 [412/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.250 [413/707] Linking static target drivers/librte_bus_vdev.a 00:02:06.250 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:06.250 [415/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:06.250 [416/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:06.250 [417/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.250 [418/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:06.250 [419/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:06.250 [420/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:06.250 [421/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:06.250 [422/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:06.250 [423/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.250 [424/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:06.250 [425/707] Linking static target lib/librte_sched.a 00:02:06.250 [426/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:06.250 [427/707] Linking static target lib/librte_table.a 00:02:06.250 [428/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.517 [429/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:06.517 [430/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:06.517 [431/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:06.517 [432/707] Linking static target lib/librte_member.a 00:02:06.517 [433/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:06.517 [434/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:06.517 [435/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:06.517 [436/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:06.517 [437/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:06.517 [438/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:06.517 [439/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.517 [440/707] Linking static target lib/librte_hash.a 00:02:06.517 [441/707] Linking static target drivers/librte_bus_pci.a 00:02:06.517 [442/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.517 [443/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:06.517 [444/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:06.517 [445/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:06.517 [446/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:06.517 [447/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.779 [448/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:06.779 [449/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:06.779 [450/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:06.779 [451/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:06.779 [452/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:06.779 [453/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:06.779 [454/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:06.779 [455/707] Linking static target lib/librte_ipsec.a 00:02:06.779 [456/707] Linking static target lib/librte_pdcp.a 00:02:06.779 [457/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:06.779 [458/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:06.779 [459/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:06.779 [460/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:06.779 [461/707] Linking static target lib/librte_node.a 00:02:06.779 [462/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:06.779 [463/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:06.779 [464/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:07.041 [465/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:07.041 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:07.041 [467/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.041 [468/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:07.041 [469/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:07.041 [470/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:07.041 [471/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:07.041 [472/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:07.041 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 
00:02:07.041 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:07.041 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:07.041 [476/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.041 [477/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:07.041 [478/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.041 [479/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:07.041 [480/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:07.041 [481/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:07.041 [482/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:07.041 [483/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:07.041 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:07.041 [485/707] Linking static target lib/librte_eventdev.a 00:02:07.041 [486/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:07.301 [487/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:07.301 [488/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:07.301 [489/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:07.301 [490/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:07.301 [491/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:07.301 [492/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.301 [493/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.301 [494/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:07.301 [495/707] Linking static target drivers/librte_mempool_ring.a 00:02:07.301 [496/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:07.301 [497/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:07.301 [498/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:07.301 [499/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:07.301 [500/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:07.301 [501/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:07.301 [502/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.301 [503/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.301 [504/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.301 [505/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:07.301 [506/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:07.301 [507/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:07.301 [508/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.301 [509/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:07.301 
[510/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:07.301 [511/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:07.301 [512/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:07.301 [513/707] Linking static target lib/librte_port.a 00:02:07.301 [514/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:07.301 [515/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:07.301 [516/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:07.301 [517/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.560 [518/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:07.560 [519/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:07.560 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:07.560 [521/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.560 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:07.560 [523/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:07.560 [524/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:07.560 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:07.560 [526/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:07.560 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:07.560 [528/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:07.560 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:07.560 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:07.560 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:07.560 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:07.560 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:07.560 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:07.818 [535/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:07.818 [536/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:07.818 [537/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:07.819 [538/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:07.819 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:07.819 [540/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:07.819 [541/707] Linking static target lib/acl/libavx2_tmp.a 00:02:07.819 [542/707] Linking static target lib/librte_acl.a 00:02:07.819 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:07.819 [544/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:07.819 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:07.819 [546/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:07.819 [547/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.819 [548/707] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:07.819 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:07.819 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:08.078 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:08.078 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:08.078 [553/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:08.078 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:08.078 [555/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.078 [556/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:08.078 [557/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:08.078 [558/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:08.078 [559/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:08.078 [560/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:08.078 [561/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:08.078 [562/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:08.078 [563/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:08.078 [564/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.335 [565/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:08.335 [566/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:08.335 [567/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:08.335 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:08.335 [569/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:08.594 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:08.594 [571/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:08.594 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:08.594 [573/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.594 [574/707] Linking static target lib/librte_ethdev.a 00:02:08.853 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:09.112 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:09.112 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:09.370 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:09.370 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:09.629 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:10.197 [581/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.456 [582/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:10.456 [583/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:10.456 [584/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:10.715 [585/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.715 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:10.715 [587/707] 
Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:10.715 [588/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:10.715 [589/707] Linking static target drivers/librte_net_i40e.a 00:02:11.650 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:11.650 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.218 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:14.122 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.122 [594/707] Linking target lib/librte_eal.so.24.0 00:02:14.381 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:14.381 [596/707] Linking target lib/librte_meter.so.24.0 00:02:14.381 [597/707] Linking target lib/librte_ring.so.24.0 00:02:14.381 [598/707] Linking target lib/librte_jobstats.so.24.0 00:02:14.381 [599/707] Linking target lib/librte_cfgfile.so.24.0 00:02:14.381 [600/707] Linking target lib/librte_timer.so.24.0 00:02:14.381 [601/707] Linking target lib/librte_dmadev.so.24.0 00:02:14.381 [602/707] Linking target lib/librte_pci.so.24.0 00:02:14.381 [603/707] Linking target lib/librte_stack.so.24.0 00:02:14.381 [604/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:14.381 [605/707] Linking target lib/librte_rawdev.so.24.0 00:02:14.381 [606/707] Linking target lib/librte_acl.so.24.0 00:02:14.381 [607/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:14.381 [608/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:14.381 [609/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:14.381 [610/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:14.381 [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:14.381 [612/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:14.381 [613/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:14.640 [614/707] Linking target lib/librte_rcu.so.24.0 00:02:14.640 [615/707] Linking target lib/librte_mempool.so.24.0 00:02:14.641 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:14.641 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:14.641 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:14.641 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:14.641 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:14.641 [621/707] Linking target lib/librte_rib.so.24.0 00:02:14.641 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:14.899 [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:14.899 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:14.899 [625/707] Linking target lib/librte_net.so.24.0 00:02:14.899 [626/707] Linking target lib/librte_distributor.so.24.0 00:02:14.899 [627/707] Linking target lib/librte_reorder.so.24.0 00:02:14.899 [628/707] Linking target lib/librte_compressdev.so.24.0 00:02:14.899 [629/707] Linking target lib/librte_gpudev.so.24.0 00:02:14.899 [630/707] Linking target 
lib/librte_regexdev.so.24.0 00:02:14.899 [631/707] Linking target lib/librte_bbdev.so.24.0 00:02:14.899 [632/707] Linking target lib/librte_mldev.so.24.0 00:02:14.899 [633/707] Linking target lib/librte_sched.so.24.0 00:02:14.899 [634/707] Linking target lib/librte_cryptodev.so.24.0 00:02:14.899 [635/707] Linking target lib/librte_fib.so.24.0 00:02:14.899 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:14.899 [637/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:14.899 [638/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:14.899 [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:15.158 [640/707] Linking target lib/librte_hash.so.24.0 00:02:15.158 [641/707] Linking target lib/librte_security.so.24.0 00:02:15.158 [642/707] Linking target lib/librte_cmdline.so.24.0 00:02:15.158 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:15.158 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:15.158 [645/707] Linking target lib/librte_efd.so.24.0 00:02:15.158 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:15.158 [647/707] Linking target lib/librte_member.so.24.0 00:02:15.158 [648/707] Linking target lib/librte_pdcp.so.24.0 00:02:15.158 [649/707] Linking target lib/librte_ipsec.so.24.0 00:02:15.417 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:15.417 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:15.985 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.985 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:16.244 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:16.244 [655/707] Linking target lib/librte_metrics.so.24.0 00:02:16.244 [656/707] Linking target lib/librte_gro.so.24.0 00:02:16.244 [657/707] Linking target lib/librte_power.so.24.0 00:02:16.244 [658/707] Linking target lib/librte_bpf.so.24.0 00:02:16.244 [659/707] Linking target lib/librte_pcapng.so.24.0 00:02:16.244 [660/707] Linking target lib/librte_gso.so.24.0 00:02:16.244 [661/707] Linking target lib/librte_ip_frag.so.24.0 00:02:16.244 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:16.244 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:16.244 [664/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:16.244 [665/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:16.244 [666/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:16.244 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:16.244 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:16.504 [669/707] Linking target lib/librte_graph.so.24.0 00:02:16.504 [670/707] Linking target lib/librte_pdump.so.24.0 00:02:16.504 [671/707] Linking target lib/librte_bitratestats.so.24.0 00:02:16.504 [672/707] Linking target lib/librte_latencystats.so.24.0 00:02:16.504 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:02:16.504 [674/707] Linking target lib/librte_port.so.24.0 00:02:16.504 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 
00:02:16.504 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:16.504 [677/707] Linking target lib/librte_node.so.24.0 00:02:16.504 [678/707] Linking target lib/librte_table.so.24.0 00:02:16.762 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:18.139 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:18.139 [681/707] Linking static target lib/librte_pipeline.a 00:02:19.516 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.516 [683/707] Linking static target lib/librte_vhost.a 00:02:19.775 [684/707] Linking target app/dpdk-pdump 00:02:19.775 [685/707] Linking target app/dpdk-test-acl 00:02:19.775 [686/707] Linking target app/dpdk-test-cmdline 00:02:19.775 [687/707] Linking target app/dpdk-test-dma-perf 00:02:19.775 [688/707] Linking target app/dpdk-test-pipeline 00:02:19.775 [689/707] Linking target app/dpdk-test-compress-perf 00:02:19.776 [690/707] Linking target app/dpdk-test-mldev 00:02:19.776 [691/707] Linking target app/dpdk-test-flow-perf 00:02:19.776 [692/707] Linking target app/dpdk-test-crypto-perf 00:02:19.776 [693/707] Linking target app/dpdk-test-eventdev 00:02:19.776 [694/707] Linking target app/dpdk-test-security-perf 00:02:19.776 [695/707] Linking target app/dpdk-test-sad 00:02:19.776 [696/707] Linking target app/dpdk-test-gpudev 00:02:19.776 [697/707] Linking target app/dpdk-dumpcap 00:02:19.776 [698/707] Linking target app/dpdk-test-fib 00:02:19.776 [699/707] Linking target app/dpdk-test-regex 00:02:19.776 [700/707] Linking target app/dpdk-proc-info 00:02:19.776 [701/707] Linking target app/dpdk-graph 00:02:19.776 [702/707] Linking target app/dpdk-test-bbdev 00:02:19.776 [703/707] Linking target app/dpdk-testpmd 00:02:21.156 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.156 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:23.064 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.064 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:23.064 16:08:11 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:23.064 16:08:11 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:23.064 16:08:11 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:23.323 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:23.323 [0/1] Installing files. 
00:02:23.587 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.587 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:23.588 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.588 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:23.589 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:23.590 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:23.591 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.591 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:23.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:23.593 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:23.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:23.593 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing 
lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing 
lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.593 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:23.857 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:23.857 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:23.857 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:23.857 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:23.858 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
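All of these headers land flat in a single staging include directory, so one -I flag is enough to compile against the staged tree before anything is installed system-wide. As a minimal sketch (main.c is a hypothetical example source, not part of this run, and target-specific machine flags may also be required):

# Sketch only: compile a hypothetical main.c that includes e.g. <rte_ethdev.h>
# against the staging prefix shown in the log above.
gcc -c main.c -o main.o \
    -I /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include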
00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:23.862 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:23.862 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:23.862 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:23.862 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:23.862 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:23.862 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:23.862 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:23.862 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:23.862 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:23.862 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:23.862 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:23.862 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:23.862 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:23.862 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:23.862 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:23.862 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:23.862 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:23.862 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:23.862 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:23.862 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:23.862 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:23.862 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:23.862 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:23.862 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:23.862 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:23.862 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:23.862 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:23.862 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:23.862 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:23.862 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:23.862 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:23.862 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:23.862 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:23.862 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:23.862 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:23.862 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:23.862 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:23.862 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:23.862 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:23.862 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:23.862 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:23.862 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:23.862 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:23.862 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:23.862 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:23.862 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:23.862 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:23.862 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:23.862 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:23.862 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:23.862 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:23.862 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:23.862 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:23.862 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:23.862 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:23.862 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:23.862 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:23.862 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:23.862 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:23.862 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:23.862 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:23.862 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:23.862 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:23.862 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:23.862 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:23.862 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:23.862 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:23.862 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:23.862 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:23.862 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:23.862 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:23.863 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:23.863 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:23.863 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:23.863 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:23.863 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:23.863 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:23.863 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:23.863 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:23.863 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:23.863 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:23.863 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:23.863 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:23.863 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:23.863 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:23.863 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:23.863 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:23.863 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:23.863 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:23.863 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:23.863 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:23.863 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:23.863 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:23.863 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:23.863 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:23.863 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:23.863 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:23.863 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:23.863 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:23.863 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:23.863 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:23.863 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:23.863 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:23.863 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:23.863 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:23.863 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:23.863 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:23.863 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:23.863 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:23.863 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:23.863 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:23.863 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:23.863 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:23.863 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:23.863 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:23.863 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:23.863 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:23.863 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:23.863 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:23.863 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:23.863 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:23.863 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:23.863 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:23.863 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:23.863 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:23.863 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:23.863 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:23.863 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:23.863 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:23.863 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:23.863 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:23.863 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:23.863 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:23.863 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:23.863 16:08:12 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:23.863 16:08:12 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.863 00:02:23.863 real 0m28.710s 00:02:23.863 user 9m21.186s 00:02:23.863 sys 2m8.572s 00:02:23.863 16:08:12 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:23.863 16:08:12 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:23.863 ************************************ 00:02:23.863 END TEST build_native_dpdk 00:02:23.863 ************************************ 00:02:23.863 16:08:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:23.863 16:08:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:23.863 16:08:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:23.863 16:08:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:23.863 16:08:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:23.863 16:08:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:23.863 16:08:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:23.863 16:08:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:24.123 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:24.123 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:24.123 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:24.382 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:24.641 Using 'verbs' RDMA provider 00:02:37.424 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:49.640 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:49.640 Creating mk/config.mk...done. 00:02:49.640 Creating mk/cc.flags.mk...done. 00:02:49.640 Type 'make' to build.
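The configure step above picks up the staged DPDK through the libdpdk.pc file installed into build/lib/pkgconfig earlier in the log. As a minimal sketch of consuming that same staged build outside this pipeline (app.c is a hypothetical source file, not part of this run):

# Point pkg-config at the staging area from the log, then query and compile.
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk    # sanity check that the .pc file resolves
cc app.c -o app $(pkg-config --cflags --libs libdpdk)
# At run time the staged shared objects must also be locatable, e.g.:
# LD_LIBRARY_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib ./app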
00:02:49.640 16:08:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:49.640 16:08:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:49.640 16:08:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:49.640 16:08:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.640 ************************************ 00:02:49.640 START TEST make 00:02:49.640 ************************************ 00:02:49.640 16:08:38 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:51.566 The Meson build system 00:02:51.566 Version: 1.5.0 00:02:51.566 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:51.566 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:51.566 Build type: native build 00:02:51.566 Project name: libvfio-user 00:02:51.566 Project version: 0.0.1 00:02:51.566 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:51.566 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:51.566 Host machine cpu family: x86_64 00:02:51.566 Host machine cpu: x86_64 00:02:51.566 Run-time dependency threads found: YES 00:02:51.566 Library dl found: YES 00:02:51.566 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:51.566 Run-time dependency json-c found: YES 0.17 00:02:51.566 Run-time dependency cmocka found: YES 1.1.7 00:02:51.566 Program pytest-3 found: NO 00:02:51.566 Program flake8 found: NO 00:02:51.566 Program misspell-fixer found: NO 00:02:51.566 Program restructuredtext-lint found: NO 00:02:51.566 Program valgrind found: YES (/usr/bin/valgrind) 00:02:51.566 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:51.566 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:51.566 Compiler for C supports arguments -Wwrite-strings: YES 00:02:51.566 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:51.566 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:51.566 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:51.566 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:51.566 Build targets in project: 8 00:02:51.566 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:51.566 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:51.566 00:02:51.566 libvfio-user 0.0.1 00:02:51.566 00:02:51.566 User defined options 00:02:51.566 buildtype : debug 00:02:51.566 default_library: shared 00:02:51.566 libdir : /usr/local/lib 00:02:51.566 00:02:51.566 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.134 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:52.134 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:52.393 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:52.393 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:52.393 [4/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:52.393 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:52.393 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:52.393 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:52.393 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:52.393 [9/37] Compiling C object samples/null.p/null.c.o 00:02:52.393 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:52.393 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:52.393 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:52.393 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:52.393 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:52.393 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:52.393 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:52.393 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:52.393 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:52.393 [19/37] Compiling C object samples/server.p/server.c.o 00:02:52.393 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:52.393 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:52.393 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:52.393 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:52.393 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:52.393 [25/37] Compiling C object samples/client.p/client.c.o 00:02:52.393 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:52.393 [27/37] Linking target samples/client 00:02:52.393 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:52.393 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:52.393 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:52.651 [31/37] Linking target test/unit_tests 00:02:52.651 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:52.651 [33/37] Linking target samples/lspci 00:02:52.651 [34/37] Linking target samples/server 00:02:52.651 [35/37] Linking target samples/null 00:02:52.651 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:52.651 [37/37] Linking target samples/gpio-pci-idio-16 00:02:52.651 INFO: autodetecting backend as ninja 00:02:52.651 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
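The libvfio-user subproject is a stock Meson project, so the configuration Meson reports above can be reproduced directly. A minimal sketch, assuming the sources are checked out at ./libvfio-user and using the same buildtype and default_library options the log shows:

  # Configure a debug build producing shared libraries, then compile with ninja
  meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  # Stage the result under a local prefix instead of the system, as the CI does via DESTDIR
  DESTDIR="$PWD/stage" meson install --quiet -C build-debug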
00:02:52.651 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:53.219 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:53.219 ninja: no work to do. 00:03:19.774 CC lib/log/log.o 00:03:19.774 CC lib/ut/ut.o 00:03:19.774 CC lib/log/log_flags.o 00:03:19.774 CC lib/ut_mock/mock.o 00:03:19.774 CC lib/log/log_deprecated.o 00:03:20.033 LIB libspdk_ut.a 00:03:20.033 LIB libspdk_log.a 00:03:20.033 LIB libspdk_ut_mock.a 00:03:20.033 SO libspdk_ut.so.2.0 00:03:20.033 SO libspdk_ut_mock.so.6.0 00:03:20.033 SO libspdk_log.so.7.1 00:03:20.033 SYMLINK libspdk_ut.so 00:03:20.033 SYMLINK libspdk_ut_mock.so 00:03:20.033 SYMLINK libspdk_log.so 00:03:20.602 CC lib/dma/dma.o 00:03:20.602 CC lib/util/base64.o 00:03:20.602 CXX lib/trace_parser/trace.o 00:03:20.602 CC lib/util/bit_array.o 00:03:20.602 CC lib/ioat/ioat.o 00:03:20.602 CC lib/util/cpuset.o 00:03:20.602 CC lib/util/crc16.o 00:03:20.602 CC lib/util/crc32.o 00:03:20.602 CC lib/util/crc32c.o 00:03:20.602 CC lib/util/crc32_ieee.o 00:03:20.602 CC lib/util/crc64.o 00:03:20.602 CC lib/util/dif.o 00:03:20.602 CC lib/util/fd.o 00:03:20.602 CC lib/util/fd_group.o 00:03:20.602 CC lib/util/file.o 00:03:20.602 CC lib/util/hexlify.o 00:03:20.602 CC lib/util/iov.o 00:03:20.602 CC lib/util/math.o 00:03:20.602 CC lib/util/net.o 00:03:20.602 CC lib/util/pipe.o 00:03:20.602 CC lib/util/strerror_tls.o 00:03:20.602 CC lib/util/string.o 00:03:20.602 CC lib/util/uuid.o 00:03:20.602 CC lib/util/xor.o 00:03:20.602 CC lib/util/zipf.o 00:03:20.602 CC lib/util/md5.o 00:03:20.602 CC lib/vfio_user/host/vfio_user.o 00:03:20.602 CC lib/vfio_user/host/vfio_user_pci.o 00:03:20.602 LIB libspdk_dma.a 00:03:20.861 SO libspdk_dma.so.5.0 00:03:20.861 SYMLINK libspdk_dma.so 00:03:20.861 LIB libspdk_ioat.a 00:03:20.861 SO libspdk_ioat.so.7.0 00:03:20.861 LIB libspdk_vfio_user.a 00:03:20.861 SYMLINK libspdk_ioat.so 00:03:20.861 SO libspdk_vfio_user.so.5.0 00:03:20.861 SYMLINK libspdk_vfio_user.so 00:03:21.123 LIB libspdk_util.a 00:03:21.123 SO libspdk_util.so.10.1 00:03:21.123 SYMLINK libspdk_util.so 00:03:21.123 LIB libspdk_trace_parser.a 00:03:21.382 SO libspdk_trace_parser.so.6.0 00:03:21.382 SYMLINK libspdk_trace_parser.so 00:03:21.382 CC lib/vmd/vmd.o 00:03:21.641 CC lib/vmd/led.o 00:03:21.641 CC lib/json/json_parse.o 00:03:21.641 CC lib/json/json_util.o 00:03:21.641 CC lib/json/json_write.o 00:03:21.641 CC lib/idxd/idxd.o 00:03:21.641 CC lib/idxd/idxd_user.o 00:03:21.641 CC lib/idxd/idxd_kernel.o 00:03:21.641 CC lib/env_dpdk/env.o 00:03:21.641 CC lib/env_dpdk/memory.o 00:03:21.641 CC lib/env_dpdk/pci.o 00:03:21.641 CC lib/env_dpdk/init.o 00:03:21.641 CC lib/conf/conf.o 00:03:21.641 CC lib/rdma_utils/rdma_utils.o 00:03:21.641 CC lib/env_dpdk/pci_ioat.o 00:03:21.641 CC lib/env_dpdk/threads.o 00:03:21.641 CC lib/env_dpdk/pci_virtio.o 00:03:21.641 CC lib/env_dpdk/pci_vmd.o 00:03:21.641 CC lib/env_dpdk/pci_idxd.o 00:03:21.641 CC lib/env_dpdk/pci_event.o 00:03:21.641 CC lib/env_dpdk/sigbus_handler.o 00:03:21.641 CC lib/env_dpdk/pci_dpdk.o 00:03:21.641 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.641 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.900 LIB libspdk_conf.a 00:03:21.900 LIB libspdk_json.a 00:03:21.900 SO libspdk_conf.so.6.0 00:03:21.900 LIB libspdk_rdma_utils.a 00:03:21.900 SO libspdk_json.so.6.0 00:03:21.900 SO libspdk_rdma_utils.so.1.0 00:03:21.900 SYMLINK libspdk_json.so 00:03:21.900 
SYMLINK libspdk_conf.so 00:03:21.900 SYMLINK libspdk_rdma_utils.so 00:03:21.900 LIB libspdk_idxd.a 00:03:22.159 LIB libspdk_vmd.a 00:03:22.159 SO libspdk_idxd.so.12.1 00:03:22.159 SO libspdk_vmd.so.6.0 00:03:22.159 SYMLINK libspdk_idxd.so 00:03:22.159 SYMLINK libspdk_vmd.so 00:03:22.159 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.159 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.159 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.159 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.159 CC lib/rdma_provider/common.o 00:03:22.159 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:22.418 LIB libspdk_jsonrpc.a 00:03:22.418 LIB libspdk_rdma_provider.a 00:03:22.418 SO libspdk_jsonrpc.so.6.0 00:03:22.418 SO libspdk_rdma_provider.so.7.0 00:03:22.418 SYMLINK libspdk_jsonrpc.so 00:03:22.676 SYMLINK libspdk_rdma_provider.so 00:03:22.676 LIB libspdk_env_dpdk.a 00:03:22.676 SO libspdk_env_dpdk.so.15.1 00:03:22.676 SYMLINK libspdk_env_dpdk.so 00:03:22.934 CC lib/rpc/rpc.o 00:03:23.193 LIB libspdk_rpc.a 00:03:23.193 SO libspdk_rpc.so.6.0 00:03:23.193 SYMLINK libspdk_rpc.so 00:03:23.452 CC lib/trace/trace.o 00:03:23.452 CC lib/trace/trace_flags.o 00:03:23.452 CC lib/trace/trace_rpc.o 00:03:23.452 CC lib/notify/notify.o 00:03:23.452 CC lib/keyring/keyring.o 00:03:23.452 CC lib/notify/notify_rpc.o 00:03:23.452 CC lib/keyring/keyring_rpc.o 00:03:23.711 LIB libspdk_notify.a 00:03:23.711 LIB libspdk_keyring.a 00:03:23.711 SO libspdk_notify.so.6.0 00:03:23.711 LIB libspdk_trace.a 00:03:23.711 SO libspdk_keyring.so.2.0 00:03:23.711 SO libspdk_trace.so.11.0 00:03:23.711 SYMLINK libspdk_notify.so 00:03:23.711 SYMLINK libspdk_keyring.so 00:03:23.971 SYMLINK libspdk_trace.so 00:03:24.229 CC lib/thread/thread.o 00:03:24.229 CC lib/sock/sock.o 00:03:24.229 CC lib/thread/iobuf.o 00:03:24.229 CC lib/sock/sock_rpc.o 00:03:24.488 LIB libspdk_sock.a 00:03:24.488 SO libspdk_sock.so.10.0 00:03:24.747 SYMLINK libspdk_sock.so 00:03:25.005 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:25.005 CC lib/nvme/nvme_ctrlr.o 00:03:25.005 CC lib/nvme/nvme_fabric.o 00:03:25.005 CC lib/nvme/nvme_ns_cmd.o 00:03:25.005 CC lib/nvme/nvme_ns.o 00:03:25.005 CC lib/nvme/nvme_qpair.o 00:03:25.005 CC lib/nvme/nvme_pcie_common.o 00:03:25.005 CC lib/nvme/nvme_pcie.o 00:03:25.005 CC lib/nvme/nvme.o 00:03:25.005 CC lib/nvme/nvme_quirks.o 00:03:25.005 CC lib/nvme/nvme_transport.o 00:03:25.005 CC lib/nvme/nvme_discovery.o 00:03:25.005 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:25.005 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:25.005 CC lib/nvme/nvme_tcp.o 00:03:25.005 CC lib/nvme/nvme_opal.o 00:03:25.005 CC lib/nvme/nvme_io_msg.o 00:03:25.005 CC lib/nvme/nvme_poll_group.o 00:03:25.005 CC lib/nvme/nvme_zns.o 00:03:25.005 CC lib/nvme/nvme_stubs.o 00:03:25.005 CC lib/nvme/nvme_auth.o 00:03:25.005 CC lib/nvme/nvme_cuse.o 00:03:25.005 CC lib/nvme/nvme_vfio_user.o 00:03:25.005 CC lib/nvme/nvme_rdma.o 00:03:25.264 LIB libspdk_thread.a 00:03:25.264 SO libspdk_thread.so.11.0 00:03:25.264 SYMLINK libspdk_thread.so 00:03:25.830 CC lib/blob/blobstore.o 00:03:25.830 CC lib/blob/request.o 00:03:25.830 CC lib/blob/zeroes.o 00:03:25.830 CC lib/blob/blob_bs_dev.o 00:03:25.830 CC lib/init/json_config.o 00:03:25.830 CC lib/init/subsystem.o 00:03:25.830 CC lib/init/subsystem_rpc.o 00:03:25.830 CC lib/accel/accel_rpc.o 00:03:25.830 CC lib/accel/accel.o 00:03:25.830 CC lib/init/rpc.o 00:03:25.830 CC lib/accel/accel_sw.o 00:03:25.830 CC lib/vfu_tgt/tgt_endpoint.o 00:03:25.830 CC lib/fsdev/fsdev.o 00:03:25.830 CC lib/vfu_tgt/tgt_rpc.o 00:03:25.830 CC lib/fsdev/fsdev_io.o 00:03:25.830 CC lib/fsdev/fsdev_rpc.o 
00:03:25.830 CC lib/virtio/virtio.o 00:03:25.830 CC lib/virtio/virtio_vhost_user.o 00:03:25.830 CC lib/virtio/virtio_vfio_user.o 00:03:25.830 CC lib/virtio/virtio_pci.o 00:03:25.830 LIB libspdk_init.a 00:03:26.089 SO libspdk_init.so.6.0 00:03:26.089 LIB libspdk_vfu_tgt.a 00:03:26.089 LIB libspdk_virtio.a 00:03:26.089 SO libspdk_vfu_tgt.so.3.0 00:03:26.089 SYMLINK libspdk_init.so 00:03:26.089 SO libspdk_virtio.so.7.0 00:03:26.089 SYMLINK libspdk_vfu_tgt.so 00:03:26.089 SYMLINK libspdk_virtio.so 00:03:26.348 LIB libspdk_fsdev.a 00:03:26.348 SO libspdk_fsdev.so.2.0 00:03:26.348 SYMLINK libspdk_fsdev.so 00:03:26.348 CC lib/event/app.o 00:03:26.348 CC lib/event/reactor.o 00:03:26.348 CC lib/event/log_rpc.o 00:03:26.348 CC lib/event/app_rpc.o 00:03:26.348 CC lib/event/scheduler_static.o 00:03:26.607 LIB libspdk_accel.a 00:03:26.607 SO libspdk_accel.so.16.0 00:03:26.607 SYMLINK libspdk_accel.so 00:03:26.607 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:26.607 LIB libspdk_event.a 00:03:26.607 SO libspdk_event.so.14.0 00:03:26.865 LIB libspdk_nvme.a 00:03:26.865 SYMLINK libspdk_event.so 00:03:26.865 SO libspdk_nvme.so.15.0 00:03:26.865 CC lib/bdev/bdev.o 00:03:26.865 CC lib/bdev/bdev_rpc.o 00:03:26.865 CC lib/bdev/bdev_zone.o 00:03:26.865 CC lib/bdev/part.o 00:03:26.865 CC lib/bdev/scsi_nvme.o 00:03:27.124 SYMLINK libspdk_nvme.so 00:03:27.124 LIB libspdk_fuse_dispatcher.a 00:03:27.124 SO libspdk_fuse_dispatcher.so.1.0 00:03:27.382 SYMLINK libspdk_fuse_dispatcher.so 00:03:27.950 LIB libspdk_blob.a 00:03:27.950 SO libspdk_blob.so.12.0 00:03:27.950 SYMLINK libspdk_blob.so 00:03:28.209 CC lib/lvol/lvol.o 00:03:28.209 CC lib/blobfs/blobfs.o 00:03:28.209 CC lib/blobfs/tree.o 00:03:28.782 LIB libspdk_bdev.a 00:03:28.782 LIB libspdk_blobfs.a 00:03:29.049 SO libspdk_bdev.so.17.0 00:03:29.049 SO libspdk_blobfs.so.11.0 00:03:29.049 LIB libspdk_lvol.a 00:03:29.049 SO libspdk_lvol.so.11.0 00:03:29.049 SYMLINK libspdk_bdev.so 00:03:29.049 SYMLINK libspdk_blobfs.so 00:03:29.049 SYMLINK libspdk_lvol.so 00:03:29.315 CC lib/nbd/nbd.o 00:03:29.315 CC lib/nbd/nbd_rpc.o 00:03:29.315 CC lib/nvmf/ctrlr.o 00:03:29.315 CC lib/nvmf/ctrlr_discovery.o 00:03:29.315 CC lib/ublk/ublk.o 00:03:29.315 CC lib/ublk/ublk_rpc.o 00:03:29.315 CC lib/nvmf/subsystem.o 00:03:29.315 CC lib/nvmf/ctrlr_bdev.o 00:03:29.315 CC lib/nvmf/nvmf.o 00:03:29.315 CC lib/nvmf/nvmf_rpc.o 00:03:29.315 CC lib/nvmf/transport.o 00:03:29.315 CC lib/nvmf/tcp.o 00:03:29.315 CC lib/nvmf/stubs.o 00:03:29.315 CC lib/nvmf/mdns_server.o 00:03:29.315 CC lib/scsi/dev.o 00:03:29.315 CC lib/nvmf/vfio_user.o 00:03:29.315 CC lib/scsi/lun.o 00:03:29.315 CC lib/nvmf/rdma.o 00:03:29.315 CC lib/nvmf/auth.o 00:03:29.315 CC lib/scsi/port.o 00:03:29.315 CC lib/scsi/scsi.o 00:03:29.315 CC lib/scsi/scsi_bdev.o 00:03:29.315 CC lib/scsi/scsi_pr.o 00:03:29.315 CC lib/ftl/ftl_core.o 00:03:29.315 CC lib/scsi/scsi_rpc.o 00:03:29.315 CC lib/scsi/task.o 00:03:29.315 CC lib/ftl/ftl_init.o 00:03:29.315 CC lib/ftl/ftl_layout.o 00:03:29.315 CC lib/ftl/ftl_debug.o 00:03:29.315 CC lib/ftl/ftl_io.o 00:03:29.315 CC lib/ftl/ftl_sb.o 00:03:29.315 CC lib/ftl/ftl_l2p.o 00:03:29.315 CC lib/ftl/ftl_l2p_flat.o 00:03:29.315 CC lib/ftl/ftl_nv_cache.o 00:03:29.315 CC lib/ftl/ftl_band.o 00:03:29.315 CC lib/ftl/ftl_band_ops.o 00:03:29.315 CC lib/ftl/ftl_writer.o 00:03:29.315 CC lib/ftl/ftl_rq.o 00:03:29.315 CC lib/ftl/ftl_reloc.o 00:03:29.315 CC lib/ftl/ftl_l2p_cache.o 00:03:29.315 CC lib/ftl/ftl_p2l.o 00:03:29.315 CC lib/ftl/ftl_p2l_log.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt.o 00:03:29.315 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.315 CC lib/ftl/utils/ftl_conf.o 00:03:29.315 CC lib/ftl/utils/ftl_md.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.315 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.315 CC lib/ftl/utils/ftl_bitmap.o 00:03:29.315 CC lib/ftl/utils/ftl_mempool.o 00:03:29.315 CC lib/ftl/utils/ftl_property.o 00:03:29.315 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:29.315 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.315 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:29.315 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:29.315 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:29.315 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:29.315 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.315 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:29.315 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.315 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.315 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:29.315 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.315 CC lib/ftl/base/ftl_base_dev.o 00:03:29.315 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.315 CC lib/ftl/ftl_trace.o 00:03:29.315 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:29.918 LIB libspdk_nbd.a 00:03:29.918 SO libspdk_nbd.so.7.0 00:03:29.918 SYMLINK libspdk_nbd.so 00:03:30.199 LIB libspdk_scsi.a 00:03:30.199 SO libspdk_scsi.so.9.0 00:03:30.199 LIB libspdk_ublk.a 00:03:30.199 SYMLINK libspdk_scsi.so 00:03:30.199 SO libspdk_ublk.so.3.0 00:03:30.199 SYMLINK libspdk_ublk.so 00:03:30.199 LIB libspdk_ftl.a 00:03:30.481 SO libspdk_ftl.so.9.0 00:03:30.481 CC lib/iscsi/conn.o 00:03:30.481 CC lib/iscsi/init_grp.o 00:03:30.481 CC lib/iscsi/iscsi.o 00:03:30.481 CC lib/iscsi/param.o 00:03:30.481 CC lib/iscsi/portal_grp.o 00:03:30.481 CC lib/iscsi/tgt_node.o 00:03:30.481 CC lib/iscsi/iscsi_subsystem.o 00:03:30.481 CC lib/iscsi/iscsi_rpc.o 00:03:30.481 CC lib/iscsi/task.o 00:03:30.481 CC lib/vhost/vhost.o 00:03:30.481 CC lib/vhost/vhost_rpc.o 00:03:30.481 CC lib/vhost/vhost_scsi.o 00:03:30.481 CC lib/vhost/vhost_blk.o 00:03:30.481 CC lib/vhost/rte_vhost_user.o 00:03:30.747 SYMLINK libspdk_ftl.so 00:03:31.314 LIB libspdk_nvmf.a 00:03:31.314 SO libspdk_nvmf.so.20.0 00:03:31.314 LIB libspdk_vhost.a 00:03:31.314 SO libspdk_vhost.so.8.0 00:03:31.573 SYMLINK libspdk_vhost.so 00:03:31.573 SYMLINK libspdk_nvmf.so 00:03:31.573 LIB libspdk_iscsi.a 00:03:31.573 SO libspdk_iscsi.so.8.0 00:03:31.832 SYMLINK libspdk_iscsi.so 00:03:32.401 CC module/vfu_device/vfu_virtio.o 00:03:32.401 CC module/vfu_device/vfu_virtio_blk.o 00:03:32.401 CC module/env_dpdk/env_dpdk_rpc.o 00:03:32.401 CC module/vfu_device/vfu_virtio_scsi.o 00:03:32.401 CC module/vfu_device/vfu_virtio_rpc.o 00:03:32.401 CC module/vfu_device/vfu_virtio_fs.o 00:03:32.401 LIB libspdk_env_dpdk_rpc.a 00:03:32.401 CC module/keyring/linux/keyring.o 00:03:32.401 CC module/keyring/linux/keyring_rpc.o 00:03:32.401 CC module/blob/bdev/blob_bdev.o 00:03:32.401 CC module/accel/iaa/accel_iaa_rpc.o 00:03:32.401 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.401 CC module/accel/iaa/accel_iaa.o 00:03:32.401 CC module/accel/error/accel_error.o 00:03:32.401 CC module/accel/error/accel_error_rpc.o 00:03:32.401 CC 
module/keyring/file/keyring.o 00:03:32.401 CC module/fsdev/aio/fsdev_aio.o 00:03:32.401 CC module/accel/dsa/accel_dsa.o 00:03:32.401 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:32.401 CC module/keyring/file/keyring_rpc.o 00:03:32.401 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.401 CC module/accel/ioat/accel_ioat.o 00:03:32.401 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:32.401 CC module/fsdev/aio/linux_aio_mgr.o 00:03:32.401 CC module/sock/posix/posix.o 00:03:32.401 CC module/accel/ioat/accel_ioat_rpc.o 00:03:32.401 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.401 SO libspdk_env_dpdk_rpc.so.6.0 00:03:32.659 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.659 LIB libspdk_keyring_file.a 00:03:32.659 LIB libspdk_keyring_linux.a 00:03:32.659 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.659 LIB libspdk_scheduler_gscheduler.a 00:03:32.659 SO libspdk_keyring_linux.so.1.0 00:03:32.659 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.659 SO libspdk_keyring_file.so.2.0 00:03:32.659 LIB libspdk_accel_ioat.a 00:03:32.659 LIB libspdk_accel_iaa.a 00:03:32.659 LIB libspdk_accel_error.a 00:03:32.659 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.659 LIB libspdk_scheduler_dynamic.a 00:03:32.659 SO libspdk_accel_error.so.2.0 00:03:32.659 SO libspdk_accel_iaa.so.3.0 00:03:32.659 SO libspdk_accel_ioat.so.6.0 00:03:32.659 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.659 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.659 SYMLINK libspdk_keyring_linux.so 00:03:32.659 LIB libspdk_blob_bdev.a 00:03:32.659 SYMLINK libspdk_keyring_file.so 00:03:32.659 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.659 LIB libspdk_accel_dsa.a 00:03:32.659 SO libspdk_blob_bdev.so.12.0 00:03:32.659 SYMLINK libspdk_accel_error.so 00:03:32.659 SYMLINK libspdk_accel_iaa.so 00:03:32.659 SYMLINK libspdk_accel_ioat.so 00:03:32.659 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.918 SO libspdk_accel_dsa.so.5.0 00:03:32.918 LIB libspdk_vfu_device.a 00:03:32.918 SYMLINK libspdk_blob_bdev.so 00:03:32.918 SYMLINK libspdk_accel_dsa.so 00:03:32.918 SO libspdk_vfu_device.so.3.0 00:03:32.918 SYMLINK libspdk_vfu_device.so 00:03:32.918 LIB libspdk_fsdev_aio.a 00:03:33.176 SO libspdk_fsdev_aio.so.1.0 00:03:33.176 LIB libspdk_sock_posix.a 00:03:33.176 SO libspdk_sock_posix.so.6.0 00:03:33.176 SYMLINK libspdk_fsdev_aio.so 00:03:33.176 SYMLINK libspdk_sock_posix.so 00:03:33.434 CC module/bdev/gpt/vbdev_gpt.o 00:03:33.434 CC module/bdev/gpt/gpt.o 00:03:33.434 CC module/bdev/malloc/bdev_malloc.o 00:03:33.434 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:33.434 CC module/bdev/error/vbdev_error.o 00:03:33.434 CC module/bdev/delay/vbdev_delay.o 00:03:33.434 CC module/bdev/error/vbdev_error_rpc.o 00:03:33.434 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:33.434 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.434 CC module/bdev/raid/bdev_raid.o 00:03:33.434 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.434 CC module/bdev/raid/raid0.o 00:03:33.434 CC module/bdev/nvme/bdev_nvme.o 00:03:33.434 CC module/bdev/null/bdev_null_rpc.o 00:03:33.434 CC module/bdev/null/bdev_null.o 00:03:33.434 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:33.434 CC module/bdev/raid/raid1.o 00:03:33.434 CC module/bdev/nvme/nvme_rpc.o 00:03:33.434 CC module/bdev/raid/concat.o 00:03:33.434 CC module/bdev/lvol/vbdev_lvol.o 00:03:33.434 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:33.434 CC module/bdev/nvme/bdev_mdns_client.o 00:03:33.434 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:33.434 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:33.434 CC 
module/bdev/nvme/vbdev_opal.o 00:03:33.434 CC module/bdev/passthru/vbdev_passthru.o 00:03:33.434 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:33.434 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:33.434 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.434 CC module/bdev/split/vbdev_split.o 00:03:33.434 CC module/bdev/split/vbdev_split_rpc.o 00:03:33.434 CC module/blobfs/bdev/blobfs_bdev.o 00:03:33.434 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:33.434 CC module/bdev/aio/bdev_aio.o 00:03:33.434 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.434 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.434 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.434 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.434 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.434 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.434 CC module/bdev/ftl/bdev_ftl.o 00:03:33.434 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.693 LIB libspdk_bdev_gpt.a 00:03:33.693 LIB libspdk_bdev_split.a 00:03:33.693 LIB libspdk_blobfs_bdev.a 00:03:33.693 LIB libspdk_bdev_error.a 00:03:33.693 LIB libspdk_bdev_null.a 00:03:33.693 SO libspdk_bdev_gpt.so.6.0 00:03:33.693 SO libspdk_bdev_split.so.6.0 00:03:33.693 SO libspdk_blobfs_bdev.so.6.0 00:03:33.693 LIB libspdk_bdev_ftl.a 00:03:33.693 SO libspdk_bdev_error.so.6.0 00:03:33.693 SO libspdk_bdev_null.so.6.0 00:03:33.693 SYMLINK libspdk_bdev_gpt.so 00:03:33.693 SO libspdk_bdev_ftl.so.6.0 00:03:33.693 SYMLINK libspdk_bdev_split.so 00:03:33.693 SYMLINK libspdk_blobfs_bdev.so 00:03:33.693 LIB libspdk_bdev_passthru.a 00:03:33.693 LIB libspdk_bdev_aio.a 00:03:33.693 LIB libspdk_bdev_delay.a 00:03:33.693 LIB libspdk_bdev_iscsi.a 00:03:33.693 LIB libspdk_bdev_malloc.a 00:03:33.693 SYMLINK libspdk_bdev_null.so 00:03:33.693 SYMLINK libspdk_bdev_error.so 00:03:33.693 LIB libspdk_bdev_zone_block.a 00:03:33.693 SO libspdk_bdev_passthru.so.6.0 00:03:33.693 SO libspdk_bdev_delay.so.6.0 00:03:33.693 SO libspdk_bdev_aio.so.6.0 00:03:33.693 SYMLINK libspdk_bdev_ftl.so 00:03:33.693 SO libspdk_bdev_malloc.so.6.0 00:03:33.693 SO libspdk_bdev_iscsi.so.6.0 00:03:33.951 SO libspdk_bdev_zone_block.so.6.0 00:03:33.951 SYMLINK libspdk_bdev_passthru.so 00:03:33.951 SYMLINK libspdk_bdev_aio.so 00:03:33.951 SYMLINK libspdk_bdev_iscsi.so 00:03:33.951 SYMLINK libspdk_bdev_delay.so 00:03:33.952 SYMLINK libspdk_bdev_malloc.so 00:03:33.952 SYMLINK libspdk_bdev_zone_block.so 00:03:33.952 LIB libspdk_bdev_virtio.a 00:03:33.952 LIB libspdk_bdev_lvol.a 00:03:33.952 SO libspdk_bdev_virtio.so.6.0 00:03:33.952 SO libspdk_bdev_lvol.so.6.0 00:03:33.952 SYMLINK libspdk_bdev_virtio.so 00:03:33.952 SYMLINK libspdk_bdev_lvol.so 00:03:34.210 LIB libspdk_bdev_raid.a 00:03:34.210 SO libspdk_bdev_raid.so.6.0 00:03:34.469 SYMLINK libspdk_bdev_raid.so 00:03:35.406 LIB libspdk_bdev_nvme.a 00:03:35.406 SO libspdk_bdev_nvme.so.7.1 00:03:35.406 SYMLINK libspdk_bdev_nvme.so 00:03:35.975 CC module/event/subsystems/vmd/vmd.o 00:03:35.975 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.975 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.975 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.234 CC module/event/subsystems/fsdev/fsdev.o 00:03:36.234 CC module/event/subsystems/keyring/keyring.o 00:03:36.234 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:36.234 CC module/event/subsystems/sock/sock.o 00:03:36.234 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.234 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.234 LIB libspdk_event_fsdev.a 00:03:36.234 LIB libspdk_event_vmd.a 00:03:36.234 LIB libspdk_event_scheduler.a 00:03:36.234 LIB 
libspdk_event_vhost_blk.a 00:03:36.234 LIB libspdk_event_iobuf.a 00:03:36.234 LIB libspdk_event_keyring.a 00:03:36.234 LIB libspdk_event_vfu_tgt.a 00:03:36.234 LIB libspdk_event_sock.a 00:03:36.234 SO libspdk_event_fsdev.so.1.0 00:03:36.234 SO libspdk_event_scheduler.so.4.0 00:03:36.234 SO libspdk_event_vmd.so.6.0 00:03:36.234 SO libspdk_event_vhost_blk.so.3.0 00:03:36.234 SO libspdk_event_keyring.so.1.0 00:03:36.234 SO libspdk_event_iobuf.so.3.0 00:03:36.234 SO libspdk_event_vfu_tgt.so.3.0 00:03:36.234 SO libspdk_event_sock.so.5.0 00:03:36.234 SYMLINK libspdk_event_fsdev.so 00:03:36.234 SYMLINK libspdk_event_scheduler.so 00:03:36.234 SYMLINK libspdk_event_vmd.so 00:03:36.234 SYMLINK libspdk_event_vhost_blk.so 00:03:36.234 SYMLINK libspdk_event_vfu_tgt.so 00:03:36.234 SYMLINK libspdk_event_keyring.so 00:03:36.234 SYMLINK libspdk_event_iobuf.so 00:03:36.493 SYMLINK libspdk_event_sock.so 00:03:36.751 CC module/event/subsystems/accel/accel.o 00:03:36.751 LIB libspdk_event_accel.a 00:03:37.010 SO libspdk_event_accel.so.6.0 00:03:37.010 SYMLINK libspdk_event_accel.so 00:03:37.269 CC module/event/subsystems/bdev/bdev.o 00:03:37.528 LIB libspdk_event_bdev.a 00:03:37.528 SO libspdk_event_bdev.so.6.0 00:03:37.528 SYMLINK libspdk_event_bdev.so 00:03:37.787 CC module/event/subsystems/scsi/scsi.o 00:03:38.046 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:38.046 CC module/event/subsystems/nbd/nbd.o 00:03:38.046 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:38.046 CC module/event/subsystems/ublk/ublk.o 00:03:38.046 LIB libspdk_event_nbd.a 00:03:38.046 LIB libspdk_event_ublk.a 00:03:38.046 LIB libspdk_event_scsi.a 00:03:38.046 SO libspdk_event_nbd.so.6.0 00:03:38.046 SO libspdk_event_ublk.so.3.0 00:03:38.046 SO libspdk_event_scsi.so.6.0 00:03:38.046 LIB libspdk_event_nvmf.a 00:03:38.046 SYMLINK libspdk_event_ublk.so 00:03:38.046 SYMLINK libspdk_event_nbd.so 00:03:38.046 SO libspdk_event_nvmf.so.6.0 00:03:38.046 SYMLINK libspdk_event_scsi.so 00:03:38.305 SYMLINK libspdk_event_nvmf.so 00:03:38.564 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.564 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.564 LIB libspdk_event_vhost_scsi.a 00:03:38.823 SO libspdk_event_vhost_scsi.so.3.0 00:03:38.823 LIB libspdk_event_iscsi.a 00:03:38.823 SO libspdk_event_iscsi.so.6.0 00:03:38.823 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.823 SYMLINK libspdk_event_iscsi.so 00:03:39.082 SO libspdk.so.6.0 00:03:39.082 SYMLINK libspdk.so 00:03:39.340 CXX app/trace/trace.o 00:03:39.340 CC app/spdk_nvme_perf/perf.o 00:03:39.340 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.341 CC app/spdk_lspci/spdk_lspci.o 00:03:39.341 TEST_HEADER include/spdk/accel_module.h 00:03:39.341 TEST_HEADER include/spdk/accel.h 00:03:39.341 CC test/rpc_client/rpc_client_test.o 00:03:39.341 TEST_HEADER include/spdk/assert.h 00:03:39.341 TEST_HEADER include/spdk/base64.h 00:03:39.341 TEST_HEADER include/spdk/barrier.h 00:03:39.341 TEST_HEADER include/spdk/bdev.h 00:03:39.341 TEST_HEADER include/spdk/bdev_module.h 00:03:39.341 CC app/spdk_top/spdk_top.o 00:03:39.341 CC app/trace_record/trace_record.o 00:03:39.341 CC app/spdk_nvme_identify/identify.o 00:03:39.341 TEST_HEADER include/spdk/bit_pool.h 00:03:39.341 TEST_HEADER include/spdk/bit_array.h 00:03:39.341 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.341 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.341 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.341 TEST_HEADER include/spdk/blobfs.h 00:03:39.341 TEST_HEADER include/spdk/blob.h 00:03:39.341 TEST_HEADER include/spdk/conf.h 
00:03:39.341 TEST_HEADER include/spdk/config.h 00:03:39.341 TEST_HEADER include/spdk/cpuset.h 00:03:39.341 TEST_HEADER include/spdk/crc16.h 00:03:39.341 TEST_HEADER include/spdk/dif.h 00:03:39.341 TEST_HEADER include/spdk/crc64.h 00:03:39.341 TEST_HEADER include/spdk/crc32.h 00:03:39.341 TEST_HEADER include/spdk/dma.h 00:03:39.341 TEST_HEADER include/spdk/endian.h 00:03:39.341 TEST_HEADER include/spdk/env.h 00:03:39.341 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.341 TEST_HEADER include/spdk/event.h 00:03:39.341 TEST_HEADER include/spdk/fd_group.h 00:03:39.341 TEST_HEADER include/spdk/fd.h 00:03:39.341 TEST_HEADER include/spdk/file.h 00:03:39.341 TEST_HEADER include/spdk/fsdev.h 00:03:39.341 TEST_HEADER include/spdk/ftl.h 00:03:39.341 TEST_HEADER include/spdk/fsdev_module.h 00:03:39.341 TEST_HEADER include/spdk/hexlify.h 00:03:39.341 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.341 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.341 TEST_HEADER include/spdk/idxd.h 00:03:39.341 TEST_HEADER include/spdk/histogram_data.h 00:03:39.341 TEST_HEADER include/spdk/init.h 00:03:39.341 TEST_HEADER include/spdk/ioat.h 00:03:39.341 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.341 TEST_HEADER include/spdk/json.h 00:03:39.341 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.341 TEST_HEADER include/spdk/keyring.h 00:03:39.341 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.341 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.341 TEST_HEADER include/spdk/keyring_module.h 00:03:39.341 TEST_HEADER include/spdk/log.h 00:03:39.341 TEST_HEADER include/spdk/md5.h 00:03:39.341 TEST_HEADER include/spdk/likely.h 00:03:39.341 TEST_HEADER include/spdk/lvol.h 00:03:39.341 TEST_HEADER include/spdk/memory.h 00:03:39.341 CC app/spdk_dd/spdk_dd.o 00:03:39.341 TEST_HEADER include/spdk/net.h 00:03:39.341 TEST_HEADER include/spdk/nbd.h 00:03:39.341 TEST_HEADER include/spdk/mmio.h 00:03:39.341 TEST_HEADER include/spdk/notify.h 00:03:39.341 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.341 TEST_HEADER include/spdk/nvme.h 00:03:39.341 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.341 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.341 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.341 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.341 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.341 TEST_HEADER include/spdk/nvmf.h 00:03:39.341 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.341 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.341 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.341 CC app/nvmf_tgt/nvmf_main.o 00:03:39.341 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.341 TEST_HEADER include/spdk/opal.h 00:03:39.341 TEST_HEADER include/spdk/opal_spec.h 00:03:39.341 TEST_HEADER include/spdk/pipe.h 00:03:39.341 TEST_HEADER include/spdk/pci_ids.h 00:03:39.341 TEST_HEADER include/spdk/reduce.h 00:03:39.341 TEST_HEADER include/spdk/queue.h 00:03:39.341 TEST_HEADER include/spdk/rpc.h 00:03:39.341 TEST_HEADER include/spdk/scsi.h 00:03:39.341 TEST_HEADER include/spdk/scheduler.h 00:03:39.341 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.341 TEST_HEADER include/spdk/stdinc.h 00:03:39.341 TEST_HEADER include/spdk/string.h 00:03:39.341 TEST_HEADER include/spdk/sock.h 00:03:39.341 TEST_HEADER include/spdk/thread.h 00:03:39.341 TEST_HEADER include/spdk/trace.h 00:03:39.341 TEST_HEADER include/spdk/tree.h 00:03:39.341 TEST_HEADER include/spdk/trace_parser.h 00:03:39.341 TEST_HEADER include/spdk/ublk.h 00:03:39.341 TEST_HEADER include/spdk/util.h 00:03:39.341 TEST_HEADER include/spdk/uuid.h 00:03:39.341 TEST_HEADER include/spdk/version.h 00:03:39.341 
TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.341 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.341 TEST_HEADER include/spdk/vhost.h 00:03:39.341 TEST_HEADER include/spdk/vmd.h 00:03:39.341 TEST_HEADER include/spdk/xor.h 00:03:39.341 TEST_HEADER include/spdk/zipf.h 00:03:39.341 CXX test/cpp_headers/accel.o 00:03:39.341 CXX test/cpp_headers/accel_module.o 00:03:39.341 CXX test/cpp_headers/barrier.o 00:03:39.341 CXX test/cpp_headers/assert.o 00:03:39.341 CXX test/cpp_headers/base64.o 00:03:39.341 CXX test/cpp_headers/bdev.o 00:03:39.341 CXX test/cpp_headers/bdev_zone.o 00:03:39.341 CXX test/cpp_headers/bdev_module.o 00:03:39.341 CXX test/cpp_headers/bit_pool.o 00:03:39.341 CXX test/cpp_headers/bit_array.o 00:03:39.341 CXX test/cpp_headers/blob_bdev.o 00:03:39.341 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.341 CXX test/cpp_headers/blobfs.o 00:03:39.609 CXX test/cpp_headers/blob.o 00:03:39.609 CXX test/cpp_headers/config.o 00:03:39.609 CXX test/cpp_headers/conf.o 00:03:39.609 CC app/spdk_tgt/spdk_tgt.o 00:03:39.609 CXX test/cpp_headers/crc16.o 00:03:39.609 CXX test/cpp_headers/crc32.o 00:03:39.609 CXX test/cpp_headers/cpuset.o 00:03:39.609 CXX test/cpp_headers/dif.o 00:03:39.609 CXX test/cpp_headers/dma.o 00:03:39.609 CXX test/cpp_headers/crc64.o 00:03:39.609 CXX test/cpp_headers/endian.o 00:03:39.609 CXX test/cpp_headers/env_dpdk.o 00:03:39.609 CXX test/cpp_headers/event.o 00:03:39.609 CXX test/cpp_headers/fd_group.o 00:03:39.609 CXX test/cpp_headers/env.o 00:03:39.609 CXX test/cpp_headers/fd.o 00:03:39.609 CXX test/cpp_headers/file.o 00:03:39.609 CXX test/cpp_headers/fsdev.o 00:03:39.609 CXX test/cpp_headers/fsdev_module.o 00:03:39.609 CXX test/cpp_headers/ftl.o 00:03:39.609 CXX test/cpp_headers/gpt_spec.o 00:03:39.609 CXX test/cpp_headers/histogram_data.o 00:03:39.609 CXX test/cpp_headers/hexlify.o 00:03:39.609 CXX test/cpp_headers/idxd_spec.o 00:03:39.609 CXX test/cpp_headers/idxd.o 00:03:39.609 CXX test/cpp_headers/init.o 00:03:39.609 CXX test/cpp_headers/ioat.o 00:03:39.609 CXX test/cpp_headers/ioat_spec.o 00:03:39.609 CXX test/cpp_headers/iscsi_spec.o 00:03:39.609 CXX test/cpp_headers/json.o 00:03:39.609 CXX test/cpp_headers/jsonrpc.o 00:03:39.609 CXX test/cpp_headers/keyring.o 00:03:39.609 CXX test/cpp_headers/log.o 00:03:39.609 CXX test/cpp_headers/keyring_module.o 00:03:39.609 CXX test/cpp_headers/likely.o 00:03:39.609 CXX test/cpp_headers/memory.o 00:03:39.609 CXX test/cpp_headers/md5.o 00:03:39.609 CXX test/cpp_headers/lvol.o 00:03:39.609 CXX test/cpp_headers/mmio.o 00:03:39.609 CXX test/cpp_headers/nbd.o 00:03:39.609 CXX test/cpp_headers/notify.o 00:03:39.609 CXX test/cpp_headers/net.o 00:03:39.609 CXX test/cpp_headers/nvme_intel.o 00:03:39.609 CXX test/cpp_headers/nvme.o 00:03:39.609 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.609 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.609 CXX test/cpp_headers/nvme_spec.o 00:03:39.609 CXX test/cpp_headers/nvme_zns.o 00:03:39.609 CXX test/cpp_headers/nvmf.o 00:03:39.609 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.609 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.609 CXX test/cpp_headers/nvmf_spec.o 00:03:39.609 CXX test/cpp_headers/nvmf_transport.o 00:03:39.609 CXX test/cpp_headers/opal.o 00:03:39.609 CXX test/cpp_headers/opal_spec.o 00:03:39.609 CC test/thread/poller_perf/poller_perf.o 00:03:39.609 CXX test/cpp_headers/pci_ids.o 00:03:39.609 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.609 CC test/app/jsoncat/jsoncat.o 00:03:39.609 CC test/app/stub/stub.o 00:03:39.609 CC test/env/vtophys/vtophys.o 00:03:39.609 CC 
examples/util/zipf/zipf.o 00:03:39.610 CC test/env/pci/pci_ut.o 00:03:39.610 CC app/fio/nvme/fio_plugin.o 00:03:39.610 CC test/env/memory/memory_ut.o 00:03:39.610 CC examples/ioat/verify/verify.o 00:03:39.610 CC test/app/histogram_perf/histogram_perf.o 00:03:39.610 CC examples/ioat/perf/perf.o 00:03:39.610 CC test/app/bdev_svc/bdev_svc.o 00:03:39.610 CC test/dma/test_dma/test_dma.o 00:03:39.877 CC app/fio/bdev/fio_plugin.o 00:03:39.877 LINK spdk_lspci 00:03:39.877 LINK rpc_client_test 00:03:39.877 LINK interrupt_tgt 00:03:40.138 LINK spdk_nvme_discover 00:03:40.138 LINK iscsi_tgt 00:03:40.138 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.138 LINK nvmf_tgt 00:03:40.138 LINK poller_perf 00:03:40.138 LINK jsoncat 00:03:40.138 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.138 CXX test/cpp_headers/pipe.o 00:03:40.138 LINK env_dpdk_post_init 00:03:40.138 LINK vtophys 00:03:40.138 LINK histogram_perf 00:03:40.138 CXX test/cpp_headers/queue.o 00:03:40.138 CXX test/cpp_headers/reduce.o 00:03:40.138 LINK zipf 00:03:40.138 CXX test/cpp_headers/rpc.o 00:03:40.138 CXX test/cpp_headers/scheduler.o 00:03:40.138 CXX test/cpp_headers/scsi.o 00:03:40.138 CXX test/cpp_headers/sock.o 00:03:40.138 CXX test/cpp_headers/scsi_spec.o 00:03:40.138 CXX test/cpp_headers/stdinc.o 00:03:40.138 CXX test/cpp_headers/string.o 00:03:40.138 CXX test/cpp_headers/thread.o 00:03:40.138 LINK stub 00:03:40.138 CXX test/cpp_headers/trace_parser.o 00:03:40.138 CXX test/cpp_headers/trace.o 00:03:40.138 CXX test/cpp_headers/tree.o 00:03:40.138 LINK spdk_trace_record 00:03:40.138 CXX test/cpp_headers/ublk.o 00:03:40.138 CXX test/cpp_headers/util.o 00:03:40.138 CXX test/cpp_headers/uuid.o 00:03:40.138 CXX test/cpp_headers/version.o 00:03:40.138 CXX test/cpp_headers/vfio_user_pci.o 00:03:40.138 CXX test/cpp_headers/vhost.o 00:03:40.138 CXX test/cpp_headers/vfio_user_spec.o 00:03:40.138 CXX test/cpp_headers/vmd.o 00:03:40.138 CXX test/cpp_headers/xor.o 00:03:40.138 CXX test/cpp_headers/zipf.o 00:03:40.138 LINK spdk_tgt 00:03:40.397 LINK bdev_svc 00:03:40.397 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.397 LINK ioat_perf 00:03:40.397 LINK verify 00:03:40.397 LINK spdk_trace 00:03:40.397 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:40.397 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:40.397 LINK spdk_dd 00:03:40.656 LINK pci_ut 00:03:40.656 CC test/event/reactor_perf/reactor_perf.o 00:03:40.656 LINK spdk_nvme_perf 00:03:40.656 CC test/event/app_repeat/app_repeat.o 00:03:40.656 LINK spdk_nvme 00:03:40.656 CC test/event/reactor/reactor.o 00:03:40.656 CC test/event/event_perf/event_perf.o 00:03:40.656 LINK test_dma 00:03:40.656 LINK spdk_bdev 00:03:40.656 CC test/event/scheduler/scheduler.o 00:03:40.656 CC examples/sock/hello_world/hello_sock.o 00:03:40.656 LINK spdk_nvme_identify 00:03:40.656 CC examples/idxd/perf/perf.o 00:03:40.656 CC examples/vmd/led/led.o 00:03:40.656 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.656 LINK nvme_fuzz 00:03:40.656 CC examples/thread/thread/thread_ex.o 00:03:40.915 LINK mem_callbacks 00:03:40.915 LINK vhost_fuzz 00:03:40.915 LINK reactor 00:03:40.915 LINK reactor_perf 00:03:40.915 LINK event_perf 00:03:40.915 LINK spdk_top 00:03:40.915 LINK app_repeat 00:03:40.915 CC app/vhost/vhost.o 00:03:40.915 LINK led 00:03:40.915 LINK lsvmd 00:03:40.915 LINK scheduler 00:03:40.915 LINK hello_sock 00:03:40.915 LINK thread 00:03:41.173 LINK idxd_perf 00:03:41.173 LINK vhost 00:03:41.173 CC test/nvme/sgl/sgl.o 00:03:41.173 CC test/nvme/simple_copy/simple_copy.o 00:03:41.173 LINK memory_ut 00:03:41.173 CC 
test/nvme/overhead/overhead.o 00:03:41.173 CC test/nvme/aer/aer.o 00:03:41.173 CC test/nvme/fdp/fdp.o 00:03:41.173 CC test/nvme/reset/reset.o 00:03:41.173 CC test/nvme/cuse/cuse.o 00:03:41.173 CC test/nvme/compliance/nvme_compliance.o 00:03:41.173 CC test/nvme/connect_stress/connect_stress.o 00:03:41.173 CC test/nvme/boot_partition/boot_partition.o 00:03:41.173 CC test/nvme/e2edp/nvme_dp.o 00:03:41.173 CC test/nvme/startup/startup.o 00:03:41.173 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.173 CC test/nvme/err_injection/err_injection.o 00:03:41.173 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.173 CC test/nvme/reserve/reserve.o 00:03:41.173 CC test/accel/dif/dif.o 00:03:41.431 CC test/blobfs/mkfs/mkfs.o 00:03:41.431 CC test/lvol/esnap/esnap.o 00:03:41.431 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:41.431 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:41.431 CC examples/nvme/arbitration/arbitration.o 00:03:41.431 LINK startup 00:03:41.431 CC examples/nvme/abort/abort.o 00:03:41.431 CC examples/nvme/reconnect/reconnect.o 00:03:41.431 CC examples/nvme/hotplug/hotplug.o 00:03:41.431 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.431 CC examples/nvme/hello_world/hello_world.o 00:03:41.431 LINK doorbell_aers 00:03:41.431 LINK err_injection 00:03:41.431 LINK boot_partition 00:03:41.431 LINK connect_stress 00:03:41.431 LINK fused_ordering 00:03:41.431 LINK simple_copy 00:03:41.431 LINK reserve 00:03:41.431 LINK sgl 00:03:41.431 LINK mkfs 00:03:41.688 LINK reset 00:03:41.688 CC examples/accel/perf/accel_perf.o 00:03:41.688 LINK aer 00:03:41.688 CC examples/blob/cli/blobcli.o 00:03:41.688 LINK overhead 00:03:41.688 CC examples/blob/hello_world/hello_blob.o 00:03:41.688 LINK nvme_dp 00:03:41.688 LINK fdp 00:03:41.688 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:41.688 LINK pmr_persistence 00:03:41.688 LINK nvme_compliance 00:03:41.688 LINK cmb_copy 00:03:41.688 LINK hotplug 00:03:41.688 LINK hello_world 00:03:41.688 LINK arbitration 00:03:41.947 LINK abort 00:03:41.947 LINK reconnect 00:03:41.947 LINK iscsi_fuzz 00:03:41.947 LINK hello_blob 00:03:41.947 LINK hello_fsdev 00:03:41.947 LINK dif 00:03:41.947 LINK nvme_manage 00:03:41.947 LINK accel_perf 00:03:41.947 LINK blobcli 00:03:42.512 LINK cuse 00:03:42.512 CC test/bdev/bdevio/bdevio.o 00:03:42.512 CC examples/bdev/hello_world/hello_bdev.o 00:03:42.512 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.771 LINK hello_bdev 00:03:42.771 LINK bdevio 00:03:43.030 LINK bdevperf 00:03:43.598 CC examples/nvmf/nvmf/nvmf.o 00:03:43.857 LINK nvmf 00:03:45.235 LINK esnap 00:03:45.235 00:03:45.235 real 0m55.616s 00:03:45.235 user 6m50.459s 00:03:45.235 sys 2m56.720s 00:03:45.235 16:09:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:45.235 16:09:33 make -- common/autotest_common.sh@10 -- $ set +x 00:03:45.235 ************************************ 00:03:45.235 END TEST make 00:03:45.235 ************************************ 00:03:45.235 16:09:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:45.235 16:09:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:45.235 16:09:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:45.235 16:09:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.236 16:09:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:45.236 16:09:33 -- pm/common@44 -- $ pid=674738 00:03:45.236 16:09:33 -- pm/common@50 -- $ kill -TERM 674738 00:03:45.236 16:09:33 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.236 16:09:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:45.236 16:09:33 -- pm/common@44 -- $ pid=674739 00:03:45.236 16:09:33 -- pm/common@50 -- $ kill -TERM 674739 00:03:45.236 16:09:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.236 16:09:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:45.236 16:09:33 -- pm/common@44 -- $ pid=674741 00:03:45.236 16:09:33 -- pm/common@50 -- $ kill -TERM 674741 00:03:45.236 16:09:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.236 16:09:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:45.236 16:09:33 -- pm/common@44 -- $ pid=674766 00:03:45.236 16:09:33 -- pm/common@50 -- $ sudo -E kill -TERM 674766 00:03:45.236 16:09:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:45.236 16:09:33 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.495 16:09:33 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:45.495 16:09:33 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:45.495 16:09:33 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:45.495 16:09:33 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:45.495 16:09:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.495 16:09:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.495 16:09:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.495 16:09:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.495 16:09:33 -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.495 16:09:33 -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.495 16:09:33 -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.495 16:09:33 -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.495 16:09:33 -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.495 16:09:33 -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.495 16:09:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.495 16:09:33 -- scripts/common.sh@344 -- # case "$op" in 00:03:45.495 16:09:33 -- scripts/common.sh@345 -- # : 1 00:03:45.495 16:09:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.495 16:09:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.495 16:09:33 -- scripts/common.sh@365 -- # decimal 1 00:03:45.495 16:09:33 -- scripts/common.sh@353 -- # local d=1 00:03:45.495 16:09:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.495 16:09:33 -- scripts/common.sh@355 -- # echo 1 00:03:45.495 16:09:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.495 16:09:33 -- scripts/common.sh@366 -- # decimal 2 00:03:45.495 16:09:33 -- scripts/common.sh@353 -- # local d=2 00:03:45.495 16:09:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.495 16:09:33 -- scripts/common.sh@355 -- # echo 2 00:03:45.495 16:09:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.495 16:09:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.495 16:09:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.495 16:09:33 -- scripts/common.sh@368 -- # return 0 00:03:45.495 16:09:33 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.496 16:09:33 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:45.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.496 --rc genhtml_branch_coverage=1 00:03:45.496 --rc genhtml_function_coverage=1 00:03:45.496 --rc genhtml_legend=1 00:03:45.496 --rc geninfo_all_blocks=1 00:03:45.496 --rc geninfo_unexecuted_blocks=1 00:03:45.496 00:03:45.496 ' 00:03:45.496 16:09:33 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:45.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.496 --rc genhtml_branch_coverage=1 00:03:45.496 --rc genhtml_function_coverage=1 00:03:45.496 --rc genhtml_legend=1 00:03:45.496 --rc geninfo_all_blocks=1 00:03:45.496 --rc geninfo_unexecuted_blocks=1 00:03:45.496 00:03:45.496 ' 00:03:45.496 16:09:33 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:45.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.496 --rc genhtml_branch_coverage=1 00:03:45.496 --rc genhtml_function_coverage=1 00:03:45.496 --rc genhtml_legend=1 00:03:45.496 --rc geninfo_all_blocks=1 00:03:45.496 --rc geninfo_unexecuted_blocks=1 00:03:45.496 00:03:45.496 ' 00:03:45.496 16:09:33 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:45.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.496 --rc genhtml_branch_coverage=1 00:03:45.496 --rc genhtml_function_coverage=1 00:03:45.496 --rc genhtml_legend=1 00:03:45.496 --rc geninfo_all_blocks=1 00:03:45.496 --rc geninfo_unexecuted_blocks=1 00:03:45.496 00:03:45.496 ' 00:03:45.496 16:09:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:45.496 16:09:33 -- nvmf/common.sh@7 -- # uname -s 00:03:45.496 16:09:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.496 16:09:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.496 16:09:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.496 16:09:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.496 16:09:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.496 16:09:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.496 16:09:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.496 16:09:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.496 16:09:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.496 16:09:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.496 16:09:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:45.496 16:09:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:45.496 16:09:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.496 16:09:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.496 16:09:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:45.496 16:09:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.496 16:09:33 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:45.496 16:09:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:45.496 16:09:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.496 16:09:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.496 16:09:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.496 16:09:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.496 16:09:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.496 16:09:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.496 16:09:33 -- paths/export.sh@5 -- # export PATH 00:03:45.496 16:09:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.496 16:09:33 -- nvmf/common.sh@51 -- # : 0 00:03:45.496 16:09:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:45.496 16:09:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:45.496 16:09:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.496 16:09:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.496 16:09:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.496 16:09:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:45.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:45.496 16:09:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:45.496 16:09:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:45.496 16:09:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:45.496 16:09:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.496 16:09:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.496 16:09:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.496 16:09:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.496 16:09:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:45.496 16:09:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.496 16:09:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:45.496 16:09:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.496 16:09:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.496 16:09:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.496 16:09:34 -- spdk/autotest.sh@48 -- # udevadm_pid=755198 00:03:45.496 16:09:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.496 16:09:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.496 16:09:34 -- pm/common@17 -- # local monitor 00:03:45.496 16:09:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.496 16:09:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.496 16:09:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.496 16:09:34 -- pm/common@21 -- # date +%s 00:03:45.496 16:09:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.496 16:09:34 -- pm/common@21 -- # date +%s 00:03:45.496 16:09:34 -- pm/common@25 -- # sleep 1 00:03:45.496 16:09:34 -- pm/common@21 -- # date +%s 00:03:45.496 16:09:34 -- pm/common@21 -- # date +%s 00:03:45.496 16:09:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734361774 00:03:45.496 16:09:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734361774 00:03:45.496 16:09:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734361774 00:03:45.496 16:09:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734361774 00:03:45.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734361774_collect-cpu-load.pm.log 00:03:45.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734361774_collect-vmstat.pm.log 00:03:45.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734361774_collect-cpu-temp.pm.log 00:03:45.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734361774_collect-bmc-pm.bmc.pm.log 00:03:46.433 16:09:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:46.433 16:09:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:46.433 16:09:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.433 16:09:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.433 16:09:35 -- spdk/autotest.sh@59 -- # create_test_list 00:03:46.433 16:09:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:46.433 16:09:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.693 16:09:35 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:46.693 16:09:35 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.693 16:09:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.693 16:09:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:46.693 16:09:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.693 16:09:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:46.693 16:09:35 -- common/autotest_common.sh@1457 -- # uname 00:03:46.693 16:09:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:46.693 16:09:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:46.693 16:09:35 -- common/autotest_common.sh@1477 -- # uname 00:03:46.693 16:09:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:46.693 16:09:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:46.693 16:09:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:46.693 lcov: LCOV version 1.15 00:03:46.693 16:09:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:04.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.351 16:09:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.351 16:09:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.351 16:09:59 -- common/autotest_common.sh@10 -- # set +x 00:04:11.351 16:09:59 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.351 16:09:59 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.886 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:14.145 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:14.145 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:14.405 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:14.405 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:14.405 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:14.405 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:14.405 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:14.405 16:10:02 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:14.405 16:10:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:14.405 16:10:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:14.405 16:10:02 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:14.405 16:10:02 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:14.405 16:10:02 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:14.405 16:10:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:14.405 16:10:02 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:14.405 16:10:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:14.405 16:10:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:14.405 16:10:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:14.405 16:10:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.405 16:10:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.405 16:10:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:14.405 16:10:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.405 16:10:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.405 16:10:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:14.405 16:10:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:14.405 16:10:02 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.405 No valid GPT data, bailing 00:04:14.405 16:10:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.405 16:10:02 -- scripts/common.sh@394 -- # pt= 00:04:14.405 16:10:02 -- scripts/common.sh@395 -- # return 1 00:04:14.405 16:10:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.405 1+0 records in 00:04:14.405 1+0 records out 00:04:14.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511932 s, 205 MB/s 00:04:14.405 16:10:02 -- spdk/autotest.sh@105 -- # sync 00:04:14.405 16:10:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.405 16:10:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.405 16:10:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.976 16:10:08 -- spdk/autotest.sh@111 -- # uname -s 00:04:20.976 16:10:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:20.976 16:10:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:20.976 16:10:08 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:22.881 Hugepages 00:04:22.881 node hugesize free / total 00:04:22.881 node0 1048576kB 0 / 0 00:04:22.881 node0 2048kB 0 / 0 00:04:22.881 node1 1048576kB 0 / 0 00:04:22.881 node1 2048kB 0 / 0 00:04:22.881 00:04:22.881 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.881 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:22.881 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:22.881 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:22.881 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:22.881 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:04:22.881 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:22.881 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:22.881 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:22.881 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:22.881 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:22.881 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:22.881 16:10:11 -- spdk/autotest.sh@117 -- # uname -s 00:04:22.881 16:10:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:22.881 16:10:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:22.881 16:10:11 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.172 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.172 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.739 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.739 16:10:15 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:27.674 16:10:16 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:27.674 16:10:16 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:27.674 16:10:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.674 16:10:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:27.674 16:10:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:27.674 16:10:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:27.674 16:10:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.674 16:10:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:27.674 16:10:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:27.675 16:10:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:27.675 16:10:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:27.675 16:10:16 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.965 Waiting for block devices as requested 00:04:30.965 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:30.965 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:30.965 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:30.965 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:30.965 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:30.965 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:30.965 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:31.224 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:31.224 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:31.224 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:31.483 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
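For reference, the get_nvme_bdfs helper traced just above (around 00:04:27) reduces to a few lines. A minimal sketch, assuming $rootdir is the spdk checkout used throughout this run and that gen_nvme.sh emits one bdev_nvme_attach_controller entry per local controller, with params.traddr holding the PCI address:

    # Sketch of the get_nvme_bdfs helper as traced above.
    get_nvme_bdfs() {
        local bdfs=()
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} > 0)) || return 1   # the '(( 1 == 0 ))' guard in the trace
        printf '%s\n' "${bdfs[@]}"
    }

On this host it yields the single entry 0000:5e:00.0, which the pre-cleanup steps below iterate over.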
00:04:31.483 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:31.483 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:31.743 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:31.743 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:31.743 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:31.743 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:32.002 16:10:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:32.002 16:10:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:32.002 16:10:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:32.002 16:10:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:32.002 16:10:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:32.002 16:10:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:32.002 16:10:20 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:32.002 16:10:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:32.002 16:10:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:32.002 16:10:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:32.002 16:10:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:32.002 16:10:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:32.002 16:10:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:32.002 16:10:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:32.002 16:10:20 -- common/autotest_common.sh@1543 -- # continue 00:04:32.002 16:10:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:32.002 16:10:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.002 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:04:32.002 16:10:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:32.002 16:10:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.002 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:04:32.002 16:10:20 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.293 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:35.293 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:35.862 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.862 16:10:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:35.862 16:10:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.862 16:10:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.862 16:10:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:35.862 16:10:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:35.862 16:10:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.862 16:10:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:35.862 16:10:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:35.862 16:10:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:35.862 16:10:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:35.862 16:10:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:35.862 16:10:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:35.862 16:10:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:35.862 16:10:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.862 16:10:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.862 16:10:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:35.862 16:10:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:35.862 16:10:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:35.862 16:10:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.862 16:10:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:36.121 16:10:24 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:36.121 16:10:24 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:36.121 16:10:24 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:36.121 16:10:24 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:36.121 16:10:24 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:36.121 16:10:24 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:36.121 16:10:24 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=769371 00:04:36.121 16:10:24 -- common/autotest_common.sh@1585 -- # waitforlisten 769371 00:04:36.121 16:10:24 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.121 16:10:24 -- common/autotest_common.sh@835 -- # '[' -z 769371 ']' 00:04:36.121 16:10:24 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.121 16:10:24 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.121 16:10:24 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.121 16:10:24 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.121 16:10:24 -- common/autotest_common.sh@10 -- # set +x 00:04:36.121 [2024-12-16 16:10:24.530992] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
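The spdk_tgt launch and waitforlisten step logged above follow a start-then-poll pattern. A condensed sketch (the real helper also enforces the max_retries=100 budget visible in the trace; rpc_get_methods is a core SPDK RPC):

    # Start the target and wait until its RPC socket answers.
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_tgt_pid" || exit 1   # target died before it started listening
        sleep 0.5
    done

Only once the socket answers does the run proceed to the attach_controller and opal revert steps that follow.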
00:04:36.121 [2024-12-16 16:10:24.531046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769371 ] 00:04:36.121 [2024-12-16 16:10:24.606061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.121 [2024-12-16 16:10:24.629212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.380 16:10:24 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.380 16:10:24 -- common/autotest_common.sh@868 -- # return 0 00:04:36.380 16:10:24 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:36.380 16:10:24 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:36.380 16:10:24 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:39.673 nvme0n1 00:04:39.673 16:10:27 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:39.673 [2024-12-16 16:10:28.015158] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:39.673 [2024-12-16 16:10:28.015187] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:39.673 request: 00:04:39.673 { 00:04:39.673 "nvme_ctrlr_name": "nvme0", 00:04:39.673 "password": "test", 00:04:39.673 "method": "bdev_nvme_opal_revert", 00:04:39.673 "req_id": 1 00:04:39.673 } 00:04:39.673 Got JSON-RPC error response 00:04:39.673 response: 00:04:39.673 { 00:04:39.673 "code": -32603, 00:04:39.673 "message": "Internal error" 00:04:39.673 } 00:04:39.673 16:10:28 -- common/autotest_common.sh@1591 -- # true 00:04:39.673 16:10:28 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:39.673 16:10:28 -- common/autotest_common.sh@1595 -- # killprocess 769371 00:04:39.673 16:10:28 -- common/autotest_common.sh@954 -- # '[' -z 769371 ']' 00:04:39.673 16:10:28 -- common/autotest_common.sh@958 -- # kill -0 769371 00:04:39.673 16:10:28 -- common/autotest_common.sh@959 -- # uname 00:04:39.673 16:10:28 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.673 16:10:28 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769371 00:04:39.673 16:10:28 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.673 16:10:28 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.673 16:10:28 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769371' 00:04:39.673 killing process with pid 769371 00:04:39.673 16:10:28 -- common/autotest_common.sh@973 -- # kill 769371 00:04:39.673 16:10:28 -- common/autotest_common.sh@978 -- # wait 769371 00:04:41.577 16:10:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:41.577 16:10:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:41.577 16:10:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.577 16:10:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.577 16:10:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:41.577 16:10:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.577 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:41.577 16:10:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:41.577 16:10:29 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
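Before the env suite output starts below, the killprocess steps traced above (kill -0, ps, kill, wait) are worth seeing in one place. A condensed sketch; the real helper special-cases a sudo wrapper rather than simply refusing, as hinted by the '[' reactor_0 = sudo ']' check in the trace:

    # Condensed killprocess: verify the pid is alive and is not a sudo
    # wrapper before signalling it, then reap it.
    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" 2> /dev/null || return 1
        process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' in this run
        [[ $process_name == sudo ]] && return 1           # real helper handles this case
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }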
00:04:41.577 16:10:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.577 16:10:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.577 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:41.577 ************************************ 00:04:41.577 START TEST env 00:04:41.577 ************************************ 00:04:41.577 16:10:29 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.577 * Looking for test storage... 00:04:41.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.577 16:10:29 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.577 16:10:29 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.577 16:10:29 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.577 16:10:29 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.577 16:10:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.577 16:10:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.577 16:10:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.577 16:10:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.577 16:10:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.577 16:10:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.577 16:10:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.577 16:10:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.577 16:10:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.577 16:10:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.577 16:10:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.577 16:10:29 env -- scripts/common.sh@344 -- # case "$op" in 00:04:41.577 16:10:29 env -- scripts/common.sh@345 -- # : 1 00:04:41.577 16:10:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.577 16:10:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.577 16:10:29 env -- scripts/common.sh@365 -- # decimal 1 00:04:41.577 16:10:29 env -- scripts/common.sh@353 -- # local d=1 00:04:41.577 16:10:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.577 16:10:29 env -- scripts/common.sh@355 -- # echo 1 00:04:41.577 16:10:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.577 16:10:29 env -- scripts/common.sh@366 -- # decimal 2 00:04:41.577 16:10:29 env -- scripts/common.sh@353 -- # local d=2 00:04:41.577 16:10:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.578 16:10:29 env -- scripts/common.sh@355 -- # echo 2 00:04:41.578 16:10:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.578 16:10:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.578 16:10:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.578 16:10:29 env -- scripts/common.sh@368 -- # return 0 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.578 --rc genhtml_branch_coverage=1 00:04:41.578 --rc genhtml_function_coverage=1 00:04:41.578 --rc genhtml_legend=1 00:04:41.578 --rc geninfo_all_blocks=1 00:04:41.578 --rc geninfo_unexecuted_blocks=1 00:04:41.578 00:04:41.578 ' 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.578 --rc genhtml_branch_coverage=1 00:04:41.578 --rc genhtml_function_coverage=1 00:04:41.578 --rc genhtml_legend=1 00:04:41.578 --rc geninfo_all_blocks=1 00:04:41.578 --rc geninfo_unexecuted_blocks=1 00:04:41.578 00:04:41.578 ' 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.578 --rc genhtml_branch_coverage=1 00:04:41.578 --rc genhtml_function_coverage=1 00:04:41.578 --rc genhtml_legend=1 00:04:41.578 --rc geninfo_all_blocks=1 00:04:41.578 --rc geninfo_unexecuted_blocks=1 00:04:41.578 00:04:41.578 ' 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.578 --rc genhtml_branch_coverage=1 00:04:41.578 --rc genhtml_function_coverage=1 00:04:41.578 --rc genhtml_legend=1 00:04:41.578 --rc geninfo_all_blocks=1 00:04:41.578 --rc geninfo_unexecuted_blocks=1 00:04:41.578 00:04:41.578 ' 00:04:41.578 16:10:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.578 16:10:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.578 16:10:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.578 ************************************ 00:04:41.578 START TEST env_memory 00:04:41.578 ************************************ 00:04:41.578 16:10:29 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.578 00:04:41.578 00:04:41.578 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.578 http://cunit.sourceforge.net/ 00:04:41.578 00:04:41.578 00:04:41.578 Suite: memory 00:04:41.578 Test: alloc and free memory map ...[2024-12-16 16:10:29.995016] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.578 passed 00:04:41.578 Test: mem map translation ...[2024-12-16 16:10:30.015487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.578 [2024-12-16 16:10:30.015509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.578 [2024-12-16 16:10:30.015545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.578 [2024-12-16 16:10:30.015551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.578 passed 00:04:41.578 Test: mem map registration ...[2024-12-16 16:10:30.054770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:41.578 [2024-12-16 16:10:30.054789] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:41.578 passed 00:04:41.578 Test: mem map adjacent registrations ...passed 00:04:41.578 00:04:41.578 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.578 suites 1 1 n/a 0 0 00:04:41.578 tests 4 4 4 0 0 00:04:41.578 asserts 152 152 152 0 n/a 00:04:41.578 00:04:41.578 Elapsed time = 0.142 seconds 00:04:41.578 00:04:41.578 real 0m0.156s 00:04:41.578 user 0m0.148s 00:04:41.578 sys 0m0.007s 00:04:41.578 16:10:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.578 16:10:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:41.578 ************************************ 00:04:41.578 END TEST env_memory 00:04:41.578 ************************************ 00:04:41.578 16:10:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:41.578 16:10:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.578 16:10:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.578 16:10:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.578 ************************************ 00:04:41.578 START TEST env_vtophys 00:04:41.578 ************************************ 00:04:41.578 16:10:30 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:41.838 EAL: lib.eal log level changed from notice to debug 00:04:41.838 EAL: Detected lcore 0 as core 0 on socket 0 00:04:41.838 EAL: Detected lcore 1 as core 1 on socket 0 00:04:41.838 EAL: Detected lcore 2 as core 2 on socket 0 00:04:41.838 EAL: Detected lcore 3 as core 3 on socket 0 00:04:41.838 EAL: Detected lcore 4 as core 4 on socket 0 00:04:41.838 EAL: Detected lcore 5 as core 5 on socket 0 00:04:41.838 EAL: Detected lcore 6 as core 6 on socket 0 00:04:41.838 EAL: Detected lcore 7 as core 8 on socket 0 00:04:41.838 EAL: Detected lcore 8 as core 9 on socket 0 00:04:41.838 EAL: Detected lcore 9 as core 10 on socket 0 00:04:41.838 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:41.838 EAL: Detected lcore 11 as core 12 on socket 0 00:04:41.838 EAL: Detected lcore 12 as core 13 on socket 0 00:04:41.838 EAL: Detected lcore 13 as core 16 on socket 0 00:04:41.838 EAL: Detected lcore 14 as core 17 on socket 0 00:04:41.838 EAL: Detected lcore 15 as core 18 on socket 0 00:04:41.838 EAL: Detected lcore 16 as core 19 on socket 0 00:04:41.838 EAL: Detected lcore 17 as core 20 on socket 0 00:04:41.838 EAL: Detected lcore 18 as core 21 on socket 0 00:04:41.838 EAL: Detected lcore 19 as core 25 on socket 0 00:04:41.838 EAL: Detected lcore 20 as core 26 on socket 0 00:04:41.838 EAL: Detected lcore 21 as core 27 on socket 0 00:04:41.838 EAL: Detected lcore 22 as core 28 on socket 0 00:04:41.838 EAL: Detected lcore 23 as core 29 on socket 0 00:04:41.838 EAL: Detected lcore 24 as core 0 on socket 1 00:04:41.838 EAL: Detected lcore 25 as core 1 on socket 1 00:04:41.838 EAL: Detected lcore 26 as core 2 on socket 1 00:04:41.838 EAL: Detected lcore 27 as core 3 on socket 1 00:04:41.838 EAL: Detected lcore 28 as core 4 on socket 1 00:04:41.838 EAL: Detected lcore 29 as core 5 on socket 1 00:04:41.838 EAL: Detected lcore 30 as core 6 on socket 1 00:04:41.838 EAL: Detected lcore 31 as core 8 on socket 1 00:04:41.838 EAL: Detected lcore 32 as core 9 on socket 1 00:04:41.838 EAL: Detected lcore 33 as core 10 on socket 1 00:04:41.838 EAL: Detected lcore 34 as core 11 on socket 1 00:04:41.838 EAL: Detected lcore 35 as core 12 on socket 1 00:04:41.838 EAL: Detected lcore 36 as core 13 on socket 1 00:04:41.838 EAL: Detected lcore 37 as core 16 on socket 1 00:04:41.838 EAL: Detected lcore 38 as core 17 on socket 1 00:04:41.838 EAL: Detected lcore 39 as core 18 on socket 1 00:04:41.838 EAL: Detected lcore 40 as core 19 on socket 1 00:04:41.838 EAL: Detected lcore 41 as core 20 on socket 1 00:04:41.838 EAL: Detected lcore 42 as core 21 on socket 1 00:04:41.838 EAL: Detected lcore 43 as core 25 on socket 1 00:04:41.838 EAL: Detected lcore 44 as core 26 on socket 1 00:04:41.839 EAL: Detected lcore 45 as core 27 on socket 1 00:04:41.839 EAL: Detected lcore 46 as core 28 on socket 1 00:04:41.839 EAL: Detected lcore 47 as core 29 on socket 1 00:04:41.839 EAL: Detected lcore 48 as core 0 on socket 0 00:04:41.839 EAL: Detected lcore 49 as core 1 on socket 0 00:04:41.839 EAL: Detected lcore 50 as core 2 on socket 0 00:04:41.839 EAL: Detected lcore 51 as core 3 on socket 0 00:04:41.839 EAL: Detected lcore 52 as core 4 on socket 0 00:04:41.839 EAL: Detected lcore 53 as core 5 on socket 0 00:04:41.839 EAL: Detected lcore 54 as core 6 on socket 0 00:04:41.839 EAL: Detected lcore 55 as core 8 on socket 0 00:04:41.839 EAL: Detected lcore 56 as core 9 on socket 0 00:04:41.839 EAL: Detected lcore 57 as core 10 on socket 0 00:04:41.839 EAL: Detected lcore 58 as core 11 on socket 0 00:04:41.839 EAL: Detected lcore 59 as core 12 on socket 0 00:04:41.839 EAL: Detected lcore 60 as core 13 on socket 0 00:04:41.839 EAL: Detected lcore 61 as core 16 on socket 0 00:04:41.839 EAL: Detected lcore 62 as core 17 on socket 0 00:04:41.839 EAL: Detected lcore 63 as core 18 on socket 0 00:04:41.839 EAL: Detected lcore 64 as core 19 on socket 0 00:04:41.839 EAL: Detected lcore 65 as core 20 on socket 0 00:04:41.839 EAL: Detected lcore 66 as core 21 on socket 0 00:04:41.839 EAL: Detected lcore 67 as core 25 on socket 0 00:04:41.839 EAL: Detected lcore 68 as core 26 on socket 0 00:04:41.839 EAL: Detected lcore 69 as core 27 on socket 0 00:04:41.839 EAL: Detected lcore 70 as core 28 on socket 0 00:04:41.839 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:41.839 EAL: Detected lcore 72 as core 0 on socket 1 00:04:41.839 EAL: Detected lcore 73 as core 1 on socket 1 00:04:41.839 EAL: Detected lcore 74 as core 2 on socket 1 00:04:41.839 EAL: Detected lcore 75 as core 3 on socket 1 00:04:41.839 EAL: Detected lcore 76 as core 4 on socket 1 00:04:41.839 EAL: Detected lcore 77 as core 5 on socket 1 00:04:41.839 EAL: Detected lcore 78 as core 6 on socket 1 00:04:41.839 EAL: Detected lcore 79 as core 8 on socket 1 00:04:41.839 EAL: Detected lcore 80 as core 9 on socket 1 00:04:41.839 EAL: Detected lcore 81 as core 10 on socket 1 00:04:41.839 EAL: Detected lcore 82 as core 11 on socket 1 00:04:41.839 EAL: Detected lcore 83 as core 12 on socket 1 00:04:41.839 EAL: Detected lcore 84 as core 13 on socket 1 00:04:41.839 EAL: Detected lcore 85 as core 16 on socket 1 00:04:41.839 EAL: Detected lcore 86 as core 17 on socket 1 00:04:41.839 EAL: Detected lcore 87 as core 18 on socket 1 00:04:41.839 EAL: Detected lcore 88 as core 19 on socket 1 00:04:41.839 EAL: Detected lcore 89 as core 20 on socket 1 00:04:41.839 EAL: Detected lcore 90 as core 21 on socket 1 00:04:41.839 EAL: Detected lcore 91 as core 25 on socket 1 00:04:41.839 EAL: Detected lcore 92 as core 26 on socket 1 00:04:41.839 EAL: Detected lcore 93 as core 27 on socket 1 00:04:41.839 EAL: Detected lcore 94 as core 28 on socket 1 00:04:41.839 EAL: Detected lcore 95 as core 29 on socket 1 00:04:41.839 EAL: Maximum logical cores by configuration: 128 00:04:41.839 EAL: Detected CPU lcores: 96 00:04:41.839 EAL: Detected NUMA nodes: 2 00:04:41.839 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:41.839 EAL: Detected shared linkage of DPDK 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:41.839 EAL: Registered [vdev] bus. 00:04:41.839 EAL: bus.vdev log level changed from disabled to notice 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:41.839 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:41.839 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:41.839 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:41.839 EAL: No shared files mode enabled, IPC will be disabled 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: Bus pci wants IOVA as 'DC' 00:04:41.839 EAL: Bus vdev wants IOVA as 'DC' 00:04:41.839 EAL: Buses did not request a specific IOVA mode. 00:04:41.839 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:41.839 EAL: Selected IOVA mode 'VA' 00:04:41.839 EAL: Probing VFIO support... 
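Whether the probe that continues below succeeds (IOMMU type 1 supported, VFIO initialized) is a host property, not something the test controls. A quick shell-side check, illustrative only, that mirrors what EAL is deciding here:

    # Populated IOMMU groups => vfio-pci is usable and IOVA mode 'VA' is safe.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo "IOMMU present: expect \"Selected IOVA mode 'VA'\""
    else
        echo "no IOMMU groups: EAL would fall back to physical addressing"
    fi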
00:04:41.839 EAL: IOMMU type 1 (Type 1) is supported 00:04:41.839 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:41.839 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:41.839 EAL: VFIO support initialized 00:04:41.839 EAL: Ask a virtual area of 0x2e000 bytes 00:04:41.839 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:41.839 EAL: Setting up physically contiguous memory... 00:04:41.839 EAL: Setting maximum number of open files to 524288 00:04:41.839 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:41.839 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:41.839 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:41.839 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:41.839 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.839 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:41.839 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.839 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.839 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:41.839 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:41.839 EAL: Hugepages will be freed exactly as allocated. 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: TSC frequency is ~2100000 KHz 00:04:41.839 EAL: Main lcore 0 is ready (tid=7fb6a8edba00;cpuset=[0]) 00:04:41.839 EAL: Trying to obtain current memory policy. 00:04:41.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.839 EAL: Restoring previous memory policy: 0 00:04:41.839 EAL: request: mp_malloc_sync 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: Heap on socket 0 was expanded by 2MB 00:04:41.839 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:41.839 EAL: probe driver: 8086:37d2 net_i40e 00:04:41.839 EAL: Not managed by a supported kernel driver, skipped 00:04:41.839 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:41.839 EAL: probe driver: 8086:37d2 net_i40e 00:04:41.839 EAL: Not managed by a supported kernel driver, skipped 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:41.839 EAL: Mem event callback 'spdk:(nil)' registered 00:04:41.839 00:04:41.839 00:04:41.839 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.839 http://cunit.sourceforge.net/ 00:04:41.839 00:04:41.839 00:04:41.839 Suite: components_suite 00:04:41.839 Test: vtophys_malloc_test ...passed 00:04:41.839 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:41.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.839 EAL: Restoring previous memory policy: 4 00:04:41.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.839 EAL: request: mp_malloc_sync 00:04:41.839 EAL: No shared files mode enabled, IPC is disabled 00:04:41.839 EAL: Heap on socket 0 was expanded by 4MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 4MB 00:04:41.840 EAL: Trying to obtain current memory policy. 00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 6MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 6MB 00:04:41.840 EAL: Trying to obtain current memory policy. 
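The expand/shrink pairs that start here and continue below follow a clean pattern: each round of vtophys_spdk_malloc_test doubles the buffer under test, and the heap sizes logged come out as 2^n + 2 MB (4, 6, 10, ..., 1026). A one-liner to reproduce the sequence, using nothing but bash arithmetic:

    # Heap growth sizes logged by vtophys_spdk_malloc_test, n = 1..10:
    for n in {1..10}; do printf '%dMB ' $((2 ** n + 2)); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB

Each 'Calling mem event callback' line is SPDK's registered hook ('spdk:(nil)' above) being told to register or unregister the region with its address map.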
00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 10MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 10MB 00:04:41.840 EAL: Trying to obtain current memory policy. 00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 18MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 18MB 00:04:41.840 EAL: Trying to obtain current memory policy. 00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 34MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 34MB 00:04:41.840 EAL: Trying to obtain current memory policy. 00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 66MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 66MB 00:04:41.840 EAL: Trying to obtain current memory policy. 00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 130MB 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was shrunk by 130MB 00:04:41.840 EAL: Trying to obtain current memory policy. 
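Every round above and below is bracketed by 'Setting policy MPOL_PREFERRED for socket 0' and 'Restoring previous memory policy: 4' (4 being MPOL_LOCAL in numaif.h), i.e. the test pins allocations to socket 0 via set_mempolicy(2) and then puts the prior policy back. The closest shell-level equivalent when rerunning the binary by hand, path as in this workspace:

    # Approximate the MPOL_PREFERRED-on-socket-0 bracketing with numactl(8):
    numactl --preferred=0 \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys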
00:04:41.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.840 EAL: Restoring previous memory policy: 4 00:04:41.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.840 EAL: request: mp_malloc_sync 00:04:41.840 EAL: No shared files mode enabled, IPC is disabled 00:04:41.840 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.099 EAL: request: mp_malloc_sync 00:04:42.099 EAL: No shared files mode enabled, IPC is disabled 00:04:42.099 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.099 EAL: Trying to obtain current memory policy. 00:04:42.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.099 EAL: Restoring previous memory policy: 4 00:04:42.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.099 EAL: request: mp_malloc_sync 00:04:42.099 EAL: No shared files mode enabled, IPC is disabled 00:04:42.099 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.358 EAL: request: mp_malloc_sync 00:04:42.358 EAL: No shared files mode enabled, IPC is disabled 00:04:42.358 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.358 EAL: Trying to obtain current memory policy. 00:04:42.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.358 EAL: Restoring previous memory policy: 4 00:04:42.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.358 EAL: request: mp_malloc_sync 00:04:42.358 EAL: No shared files mode enabled, IPC is disabled 00:04:42.358 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.617 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.876 EAL: request: mp_malloc_sync 00:04:42.876 EAL: No shared files mode enabled, IPC is disabled 00:04:42.876 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.876 passed 00:04:42.876 00:04:42.876 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.876 suites 1 1 n/a 0 0 00:04:42.876 tests 2 2 2 0 0 00:04:42.876 asserts 497 497 497 0 n/a 00:04:42.876 00:04:42.876 Elapsed time = 0.968 seconds 00:04:42.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.877 EAL: request: mp_malloc_sync 00:04:42.877 EAL: No shared files mode enabled, IPC is disabled 00:04:42.877 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.877 EAL: No shared files mode enabled, IPC is disabled 00:04:42.877 EAL: No shared files mode enabled, IPC is disabled 00:04:42.877 EAL: No shared files mode enabled, IPC is disabled 00:04:42.877 00:04:42.877 real 0m1.096s 00:04:42.877 user 0m0.634s 00:04:42.877 sys 0m0.436s 00:04:42.877 16:10:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.877 16:10:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:42.877 ************************************ 00:04:42.877 END TEST env_vtophys 00:04:42.877 ************************************ 00:04:42.877 16:10:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.877 16:10:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.877 16:10:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.877 16:10:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.877 ************************************ 00:04:42.877 START TEST env_pci 00:04:42.877 ************************************ 00:04:42.877 16:10:31 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.877 00:04:42.877 00:04:42.877 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:42.877 http://cunit.sourceforge.net/ 00:04:42.877 00:04:42.877 00:04:42.877 Suite: pci 00:04:42.877 Test: pci_hook ...[2024-12-16 16:10:31.363591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 770656 has claimed it 00:04:42.877 EAL: Cannot find device (10000:00:01.0) 00:04:42.877 EAL: Failed to attach device on primary process 00:04:42.877 passed 00:04:42.877 00:04:42.877 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.877 suites 1 1 n/a 0 0 00:04:42.877 tests 1 1 1 0 0 00:04:42.877 asserts 25 25 25 0 n/a 00:04:42.877 00:04:42.877 Elapsed time = 0.026 seconds 00:04:42.877 00:04:42.877 real 0m0.045s 00:04:42.877 user 0m0.012s 00:04:42.877 sys 0m0.033s 00:04:42.877 16:10:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.877 16:10:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:42.877 ************************************ 00:04:42.877 END TEST env_pci 00:04:42.877 ************************************ 00:04:42.877 16:10:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.877 16:10:31 env -- env/env.sh@15 -- # uname 00:04:42.877 16:10:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.877 16:10:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.877 16:10:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.877 16:10:31 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:42.877 16:10:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.877 16:10:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.877 ************************************ 00:04:42.877 START TEST env_dpdk_post_init 00:04:42.877 ************************************ 00:04:42.877 16:10:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.136 EAL: Detected CPU lcores: 96 00:04:43.136 EAL: Detected NUMA nodes: 2 00:04:43.136 EAL: Detected shared linkage of DPDK 00:04:43.136 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.136 EAL: Selected IOVA mode 'VA' 00:04:43.136 EAL: VFIO support initialized 00:04:43.136 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.136 EAL: Using IOMMU type 1 (Type 1) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:04:43.136 EAL: Ignore mapping IO port bar(1) 00:04:43.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:44.073 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:44.073 EAL: Ignore mapping IO port bar(1) 00:04:44.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:47.360 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:47.360 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:47.360 Starting DPDK initialization... 00:04:47.360 Starting SPDK post initialization... 00:04:47.360 SPDK NVMe probe 00:04:47.360 Attaching to 0000:5e:00.0 00:04:47.360 Attached to 0000:5e:00.0 00:04:47.360 Cleaning up... 00:04:47.360 00:04:47.360 real 0m4.344s 00:04:47.360 user 0m3.240s 00:04:47.360 sys 0m0.174s 00:04:47.360 16:10:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.360 16:10:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.360 ************************************ 00:04:47.360 END TEST env_dpdk_post_init 00:04:47.360 ************************************ 00:04:47.360 16:10:35 env -- env/env.sh@26 -- # uname 00:04:47.360 16:10:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:47.360 16:10:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.360 16:10:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.360 16:10:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.360 16:10:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.360 ************************************ 00:04:47.360 START TEST env_mem_callbacks 00:04:47.360 ************************************ 00:04:47.361 16:10:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.361 EAL: Detected CPU lcores: 96 00:04:47.361 EAL: Detected NUMA nodes: 2 00:04:47.361 EAL: Detected shared linkage of DPDK 00:04:47.361 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.361 EAL: Selected IOVA mode 'VA' 00:04:47.361 EAL: VFIO support initialized 00:04:47.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.361 00:04:47.361 00:04:47.361 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.361 http://cunit.sourceforge.net/ 00:04:47.361 00:04:47.361 00:04:47.361 Suite: memory 
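The spdk_ioat/spdk_nvme probe lines above only fire for devices bound to a userspace-capable driver (vfio-pci after setup.sh). 'setup.sh status' prints the binding table seen earlier in this log; the raw sysfs equivalent, illustrative only:

    # Dump current PCI driver bindings straight from sysfs:
    for d in /sys/bus/pci/devices/*; do
        drv=$(readlink "$d/driver" 2> /dev/null) || drv=none
        printf '%-12s %s\n' "${d##*/}" "${drv##*/}"
    done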
00:04:47.361 Test: test ... 00:04:47.361 register 0x200000200000 2097152 00:04:47.361 malloc 3145728 00:04:47.361 register 0x200000400000 4194304 00:04:47.361 buf 0x200000500000 len 3145728 PASSED 00:04:47.361 malloc 64 00:04:47.361 buf 0x2000004fff40 len 64 PASSED 00:04:47.361 malloc 4194304 00:04:47.361 register 0x200000800000 6291456 00:04:47.361 buf 0x200000a00000 len 4194304 PASSED 00:04:47.361 free 0x200000500000 3145728 00:04:47.361 free 0x2000004fff40 64 00:04:47.361 unregister 0x200000400000 4194304 PASSED 00:04:47.361 free 0x200000a00000 4194304 00:04:47.361 unregister 0x200000800000 6291456 PASSED 00:04:47.361 malloc 8388608 00:04:47.361 register 0x200000400000 10485760 00:04:47.361 buf 0x200000600000 len 8388608 PASSED 00:04:47.361 free 0x200000600000 8388608 00:04:47.361 unregister 0x200000400000 10485760 PASSED 00:04:47.361 passed 00:04:47.361 00:04:47.361 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.361 suites 1 1 n/a 0 0 00:04:47.361 tests 1 1 1 0 0 00:04:47.361 asserts 15 15 15 0 n/a 00:04:47.361 00:04:47.361 Elapsed time = 0.008 seconds 00:04:47.361 00:04:47.361 real 0m0.060s 00:04:47.361 user 0m0.020s 00:04:47.361 sys 0m0.040s 00:04:47.361 16:10:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.361 16:10:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:47.361 ************************************ 00:04:47.361 END TEST env_mem_callbacks 00:04:47.361 ************************************ 00:04:47.620 00:04:47.620 real 0m6.250s 00:04:47.620 user 0m4.325s 00:04:47.620 sys 0m1.002s 00:04:47.620 16:10:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.620 16:10:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.620 ************************************ 00:04:47.620 END TEST env 00:04:47.620 ************************************ 00:04:47.620 16:10:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:47.620 16:10:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.620 16:10:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.620 16:10:36 -- common/autotest_common.sh@10 -- # set +x 00:04:47.620 ************************************ 00:04:47.620 START TEST rpc 00:04:47.620 ************************************ 00:04:47.620 16:10:36 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:47.620 * Looking for test storage... 
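The START/END banners above around each suite come from the run_test wrapper. A condensed sketch of its shape, assuming only what the banners show (the real helper in autotest_common.sh also records per-test timing and xtrace state):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        "$@"
        local rc=$?
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }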
00:04:47.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:47.620 16:10:36 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.620 16:10:36 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.620 16:10:36 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.620 16:10:36 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.620 16:10:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.620 16:10:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.620 16:10:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.620 16:10:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.620 16:10:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.620 16:10:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.620 16:10:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.620 16:10:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.620 16:10:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.620 16:10:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.620 16:10:36 rpc -- scripts/common.sh@345 -- # : 1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.620 16:10:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.620 16:10:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.620 16:10:36 rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.620 16:10:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.620 16:10:36 rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.620 16:10:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.620 16:10:36 rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.879 16:10:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.879 16:10:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.879 16:10:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.879 16:10:36 rpc -- scripts/common.sh@368 -- # return 0 00:04:47.879 16:10:36 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.879 16:10:36 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.879 --rc genhtml_branch_coverage=1 00:04:47.879 --rc genhtml_function_coverage=1 00:04:47.879 --rc genhtml_legend=1 00:04:47.879 --rc geninfo_all_blocks=1 00:04:47.879 --rc geninfo_unexecuted_blocks=1 00:04:47.879 00:04:47.879 ' 00:04:47.879 16:10:36 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.879 --rc genhtml_branch_coverage=1 00:04:47.879 --rc genhtml_function_coverage=1 00:04:47.879 --rc genhtml_legend=1 00:04:47.879 --rc geninfo_all_blocks=1 00:04:47.879 --rc geninfo_unexecuted_blocks=1 00:04:47.879 00:04:47.879 ' 00:04:47.879 16:10:36 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.879 --rc genhtml_branch_coverage=1 00:04:47.879 --rc genhtml_function_coverage=1 
00:04:47.879 --rc genhtml_legend=1 00:04:47.879 --rc geninfo_all_blocks=1 00:04:47.879 --rc geninfo_unexecuted_blocks=1 00:04:47.879 00:04:47.879 ' 00:04:47.879 16:10:36 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.879 --rc genhtml_branch_coverage=1 00:04:47.879 --rc genhtml_function_coverage=1 00:04:47.879 --rc genhtml_legend=1 00:04:47.879 --rc geninfo_all_blocks=1 00:04:47.879 --rc geninfo_unexecuted_blocks=1 00:04:47.879 00:04:47.879 ' 00:04:47.879 16:10:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:47.879 16:10:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=771473 00:04:47.879 16:10:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.880 16:10:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 771473 00:04:47.880 16:10:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 771473 ']' 00:04:47.880 16:10:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.880 16:10:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.880 16:10:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.880 16:10:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.880 16:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.880 [2024-12-16 16:10:36.273953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:47.880 [2024-12-16 16:10:36.273996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771473 ] 00:04:47.880 [2024-12-16 16:10:36.349654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.880 [2024-12-16 16:10:36.371654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:47.880 [2024-12-16 16:10:36.371690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 771473' to capture a snapshot of events at runtime. 00:04:47.880 [2024-12-16 16:10:36.371697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:47.880 [2024-12-16 16:10:36.371703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:47.880 [2024-12-16 16:10:36.371709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid771473 for offline analysis/debug. 
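The app_setup_trace NOTICE lines above describe two ways to look at the tracepoints enabled with '-e bdev': decode them live from the running target, or copy the shared-memory file for offline decoding. A minimal sketch of both, assuming the build layout used in this job, the pid printed above (771473 here; substitute whatever the current run reports), and spdk_trace's usual -s/-p/-f options:

    # Decode live events from the running spdk_tgt (app name and pid as printed above).
    ./build/bin/spdk_trace -s spdk_tgt -p 771473

    # Or keep the raw shared-memory trace file so it can be decoded after the target exits.
    cp /dev/shm/spdk_tgt_trace.pid771473 /tmp/spdk_tgt_trace.pid771473
    ./build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid771473
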
00:04:47.880 [2024-12-16 16:10:36.372231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.139 16:10:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.139 16:10:36 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.139 16:10:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.139 16:10:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.139 16:10:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:48.139 16:10:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:48.139 16:10:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.139 16:10:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.139 16:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.139 ************************************ 00:04:48.139 START TEST rpc_integrity 00:04:48.139 ************************************ 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.139 { 00:04:48.139 "name": "Malloc0", 00:04:48.139 "aliases": [ 00:04:48.139 "effd829e-5b82-4f39-a430-e1fc09e0ea26" 00:04:48.139 ], 00:04:48.139 "product_name": "Malloc disk", 00:04:48.139 "block_size": 512, 00:04:48.139 "num_blocks": 16384, 00:04:48.139 "uuid": "effd829e-5b82-4f39-a430-e1fc09e0ea26", 00:04:48.139 "assigned_rate_limits": { 00:04:48.139 "rw_ios_per_sec": 0, 00:04:48.139 "rw_mbytes_per_sec": 0, 00:04:48.139 "r_mbytes_per_sec": 0, 00:04:48.139 "w_mbytes_per_sec": 0 00:04:48.139 }, 
00:04:48.139 "claimed": false, 00:04:48.139 "zoned": false, 00:04:48.139 "supported_io_types": { 00:04:48.139 "read": true, 00:04:48.139 "write": true, 00:04:48.139 "unmap": true, 00:04:48.139 "flush": true, 00:04:48.139 "reset": true, 00:04:48.139 "nvme_admin": false, 00:04:48.139 "nvme_io": false, 00:04:48.139 "nvme_io_md": false, 00:04:48.139 "write_zeroes": true, 00:04:48.139 "zcopy": true, 00:04:48.139 "get_zone_info": false, 00:04:48.139 "zone_management": false, 00:04:48.139 "zone_append": false, 00:04:48.139 "compare": false, 00:04:48.139 "compare_and_write": false, 00:04:48.139 "abort": true, 00:04:48.139 "seek_hole": false, 00:04:48.139 "seek_data": false, 00:04:48.139 "copy": true, 00:04:48.139 "nvme_iov_md": false 00:04:48.139 }, 00:04:48.139 "memory_domains": [ 00:04:48.139 { 00:04:48.139 "dma_device_id": "system", 00:04:48.139 "dma_device_type": 1 00:04:48.139 }, 00:04:48.139 { 00:04:48.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.139 "dma_device_type": 2 00:04:48.139 } 00:04:48.139 ], 00:04:48.139 "driver_specific": {} 00:04:48.139 } 00:04:48.139 ]' 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.139 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.139 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.398 [2024-12-16 16:10:36.750676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:48.398 [2024-12-16 16:10:36.750705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.398 [2024-12-16 16:10:36.750717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5fae0 00:04:48.398 [2024-12-16 16:10:36.750724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.398 [2024-12-16 16:10:36.751779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.398 [2024-12-16 16:10:36.751799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.398 Passthru0 00:04:48.398 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.398 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.398 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.398 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.398 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.398 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.398 { 00:04:48.398 "name": "Malloc0", 00:04:48.398 "aliases": [ 00:04:48.398 "effd829e-5b82-4f39-a430-e1fc09e0ea26" 00:04:48.398 ], 00:04:48.398 "product_name": "Malloc disk", 00:04:48.398 "block_size": 512, 00:04:48.398 "num_blocks": 16384, 00:04:48.398 "uuid": "effd829e-5b82-4f39-a430-e1fc09e0ea26", 00:04:48.398 "assigned_rate_limits": { 00:04:48.398 "rw_ios_per_sec": 0, 00:04:48.398 "rw_mbytes_per_sec": 0, 00:04:48.398 "r_mbytes_per_sec": 0, 00:04:48.398 "w_mbytes_per_sec": 0 00:04:48.398 }, 00:04:48.398 "claimed": true, 00:04:48.398 "claim_type": "exclusive_write", 00:04:48.398 "zoned": false, 00:04:48.398 "supported_io_types": { 00:04:48.398 "read": true, 00:04:48.398 "write": true, 00:04:48.398 "unmap": true, 00:04:48.398 "flush": 
true, 00:04:48.398 "reset": true, 00:04:48.399 "nvme_admin": false, 00:04:48.399 "nvme_io": false, 00:04:48.399 "nvme_io_md": false, 00:04:48.399 "write_zeroes": true, 00:04:48.399 "zcopy": true, 00:04:48.399 "get_zone_info": false, 00:04:48.399 "zone_management": false, 00:04:48.399 "zone_append": false, 00:04:48.399 "compare": false, 00:04:48.399 "compare_and_write": false, 00:04:48.399 "abort": true, 00:04:48.399 "seek_hole": false, 00:04:48.399 "seek_data": false, 00:04:48.399 "copy": true, 00:04:48.399 "nvme_iov_md": false 00:04:48.399 }, 00:04:48.399 "memory_domains": [ 00:04:48.399 { 00:04:48.399 "dma_device_id": "system", 00:04:48.399 "dma_device_type": 1 00:04:48.399 }, 00:04:48.399 { 00:04:48.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.399 "dma_device_type": 2 00:04:48.399 } 00:04:48.399 ], 00:04:48.399 "driver_specific": {} 00:04:48.399 }, 00:04:48.399 { 00:04:48.399 "name": "Passthru0", 00:04:48.399 "aliases": [ 00:04:48.399 "6a545056-84bc-5e63-8b4b-0705e791a69c" 00:04:48.399 ], 00:04:48.399 "product_name": "passthru", 00:04:48.399 "block_size": 512, 00:04:48.399 "num_blocks": 16384, 00:04:48.399 "uuid": "6a545056-84bc-5e63-8b4b-0705e791a69c", 00:04:48.399 "assigned_rate_limits": { 00:04:48.399 "rw_ios_per_sec": 0, 00:04:48.399 "rw_mbytes_per_sec": 0, 00:04:48.399 "r_mbytes_per_sec": 0, 00:04:48.399 "w_mbytes_per_sec": 0 00:04:48.399 }, 00:04:48.399 "claimed": false, 00:04:48.399 "zoned": false, 00:04:48.399 "supported_io_types": { 00:04:48.399 "read": true, 00:04:48.399 "write": true, 00:04:48.399 "unmap": true, 00:04:48.399 "flush": true, 00:04:48.399 "reset": true, 00:04:48.399 "nvme_admin": false, 00:04:48.399 "nvme_io": false, 00:04:48.399 "nvme_io_md": false, 00:04:48.399 "write_zeroes": true, 00:04:48.399 "zcopy": true, 00:04:48.399 "get_zone_info": false, 00:04:48.399 "zone_management": false, 00:04:48.399 "zone_append": false, 00:04:48.399 "compare": false, 00:04:48.399 "compare_and_write": false, 00:04:48.399 "abort": true, 00:04:48.399 "seek_hole": false, 00:04:48.399 "seek_data": false, 00:04:48.399 "copy": true, 00:04:48.399 "nvme_iov_md": false 00:04:48.399 }, 00:04:48.399 "memory_domains": [ 00:04:48.399 { 00:04:48.399 "dma_device_id": "system", 00:04:48.399 "dma_device_type": 1 00:04:48.399 }, 00:04:48.399 { 00:04:48.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.399 "dma_device_type": 2 00:04:48.399 } 00:04:48.399 ], 00:04:48.399 "driver_specific": { 00:04:48.399 "passthru": { 00:04:48.399 "name": "Passthru0", 00:04:48.399 "base_bdev_name": "Malloc0" 00:04:48.399 } 00:04:48.399 } 00:04:48.399 } 00:04:48.399 ]' 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:48.399 16:10:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.399 00:04:48.399 real 0m0.283s 00:04:48.399 user 0m0.171s 00:04:48.399 sys 0m0.043s 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.399 16:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 ************************************ 00:04:48.399 END TEST rpc_integrity 00:04:48.399 ************************************ 00:04:48.399 16:10:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:48.399 16:10:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.399 16:10:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.399 16:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 ************************************ 00:04:48.399 START TEST rpc_plugins 00:04:48.399 ************************************ 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:48.399 16:10:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.399 16:10:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:48.399 16:10:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.399 16:10:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.399 16:10:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:48.399 { 00:04:48.399 "name": "Malloc1", 00:04:48.399 "aliases": [ 00:04:48.399 "89120d52-77ab-49af-ac41-64ef40610c8b" 00:04:48.399 ], 00:04:48.399 "product_name": "Malloc disk", 00:04:48.399 "block_size": 4096, 00:04:48.399 "num_blocks": 256, 00:04:48.399 "uuid": "89120d52-77ab-49af-ac41-64ef40610c8b", 00:04:48.399 "assigned_rate_limits": { 00:04:48.399 "rw_ios_per_sec": 0, 00:04:48.399 "rw_mbytes_per_sec": 0, 00:04:48.399 "r_mbytes_per_sec": 0, 00:04:48.399 "w_mbytes_per_sec": 0 00:04:48.399 }, 00:04:48.399 "claimed": false, 00:04:48.399 "zoned": false, 00:04:48.399 "supported_io_types": { 00:04:48.399 "read": true, 00:04:48.399 "write": true, 00:04:48.399 "unmap": true, 00:04:48.399 "flush": true, 00:04:48.399 "reset": true, 00:04:48.399 "nvme_admin": false, 00:04:48.399 "nvme_io": false, 00:04:48.399 "nvme_io_md": false, 00:04:48.399 "write_zeroes": true, 00:04:48.399 "zcopy": true, 00:04:48.399 "get_zone_info": false, 00:04:48.399 "zone_management": false, 00:04:48.399 "zone_append": false, 00:04:48.399 "compare": false, 00:04:48.399 "compare_and_write": false, 00:04:48.399 "abort": true, 00:04:48.399 "seek_hole": false, 00:04:48.399 "seek_data": false, 00:04:48.399 "copy": true, 00:04:48.399 "nvme_iov_md": false 
00:04:48.399 }, 00:04:48.399 "memory_domains": [ 00:04:48.399 { 00:04:48.399 "dma_device_id": "system", 00:04:48.399 "dma_device_type": 1 00:04:48.399 }, 00:04:48.399 { 00:04:48.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.399 "dma_device_type": 2 00:04:48.399 } 00:04:48.399 ], 00:04:48.399 "driver_specific": {} 00:04:48.399 } 00:04:48.399 ]' 00:04:48.399 16:10:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:48.658 16:10:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:48.658 16:10:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.658 16:10:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.658 16:10:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:48.658 16:10:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:48.658 16:10:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:48.658 00:04:48.658 real 0m0.142s 00:04:48.658 user 0m0.088s 00:04:48.658 sys 0m0.020s 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.658 16:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.658 ************************************ 00:04:48.658 END TEST rpc_plugins 00:04:48.658 ************************************ 00:04:48.658 16:10:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:48.658 16:10:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.658 16:10:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.658 16:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.658 ************************************ 00:04:48.658 START TEST rpc_trace_cmd_test 00:04:48.658 ************************************ 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.658 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:48.658 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid771473", 00:04:48.658 "tpoint_group_mask": "0x8", 00:04:48.658 "iscsi_conn": { 00:04:48.658 "mask": "0x2", 00:04:48.658 "tpoint_mask": "0x0" 00:04:48.658 }, 00:04:48.659 "scsi": { 00:04:48.659 "mask": "0x4", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "bdev": { 00:04:48.659 "mask": "0x8", 00:04:48.659 "tpoint_mask": "0xffffffffffffffff" 00:04:48.659 }, 00:04:48.659 "nvmf_rdma": { 00:04:48.659 "mask": "0x10", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "nvmf_tcp": { 00:04:48.659 "mask": "0x20", 00:04:48.659 
"tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "ftl": { 00:04:48.659 "mask": "0x40", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "blobfs": { 00:04:48.659 "mask": "0x80", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "dsa": { 00:04:48.659 "mask": "0x200", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "thread": { 00:04:48.659 "mask": "0x400", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "nvme_pcie": { 00:04:48.659 "mask": "0x800", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "iaa": { 00:04:48.659 "mask": "0x1000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "nvme_tcp": { 00:04:48.659 "mask": "0x2000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "bdev_nvme": { 00:04:48.659 "mask": "0x4000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "sock": { 00:04:48.659 "mask": "0x8000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "blob": { 00:04:48.659 "mask": "0x10000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "bdev_raid": { 00:04:48.659 "mask": "0x20000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 }, 00:04:48.659 "scheduler": { 00:04:48.659 "mask": "0x40000", 00:04:48.659 "tpoint_mask": "0x0" 00:04:48.659 } 00:04:48.659 }' 00:04:48.659 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:48.659 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:48.659 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:48.918 00:04:48.918 real 0m0.227s 00:04:48.918 user 0m0.196s 00:04:48.918 sys 0m0.022s 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.918 16:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:48.918 ************************************ 00:04:48.918 END TEST rpc_trace_cmd_test 00:04:48.918 ************************************ 00:04:48.918 16:10:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:48.918 16:10:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:48.918 16:10:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:48.918 16:10:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.918 16:10:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.918 16:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.918 ************************************ 00:04:48.918 START TEST rpc_daemon_integrity 00:04:48.918 ************************************ 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.918 16:10:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.918 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.177 { 00:04:49.177 "name": "Malloc2", 00:04:49.177 "aliases": [ 00:04:49.177 "0a80de7e-b432-480d-a549-9dfcbbfb7104" 00:04:49.177 ], 00:04:49.177 "product_name": "Malloc disk", 00:04:49.177 "block_size": 512, 00:04:49.177 "num_blocks": 16384, 00:04:49.177 "uuid": "0a80de7e-b432-480d-a549-9dfcbbfb7104", 00:04:49.177 "assigned_rate_limits": { 00:04:49.177 "rw_ios_per_sec": 0, 00:04:49.177 "rw_mbytes_per_sec": 0, 00:04:49.177 "r_mbytes_per_sec": 0, 00:04:49.177 "w_mbytes_per_sec": 0 00:04:49.177 }, 00:04:49.177 "claimed": false, 00:04:49.177 "zoned": false, 00:04:49.177 "supported_io_types": { 00:04:49.177 "read": true, 00:04:49.177 "write": true, 00:04:49.177 "unmap": true, 00:04:49.177 "flush": true, 00:04:49.177 "reset": true, 00:04:49.177 "nvme_admin": false, 00:04:49.177 "nvme_io": false, 00:04:49.177 "nvme_io_md": false, 00:04:49.177 "write_zeroes": true, 00:04:49.177 "zcopy": true, 00:04:49.177 "get_zone_info": false, 00:04:49.177 "zone_management": false, 00:04:49.177 "zone_append": false, 00:04:49.177 "compare": false, 00:04:49.177 "compare_and_write": false, 00:04:49.177 "abort": true, 00:04:49.177 "seek_hole": false, 00:04:49.177 "seek_data": false, 00:04:49.177 "copy": true, 00:04:49.177 "nvme_iov_md": false 00:04:49.177 }, 00:04:49.177 "memory_domains": [ 00:04:49.177 { 00:04:49.177 "dma_device_id": "system", 00:04:49.177 "dma_device_type": 1 00:04:49.177 }, 00:04:49.177 { 00:04:49.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.177 "dma_device_type": 2 00:04:49.177 } 00:04:49.177 ], 00:04:49.177 "driver_specific": {} 00:04:49.177 } 00:04:49.177 ]' 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.177 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.177 [2024-12-16 16:10:37.588927] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:49.177 
[2024-12-16 16:10:37.588952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.177 [2024-12-16 16:10:37.588966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e1df80 00:04:49.177 [2024-12-16 16:10:37.588972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.177 [2024-12-16 16:10:37.589921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.178 [2024-12-16 16:10:37.589940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.178 Passthru0 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.178 { 00:04:49.178 "name": "Malloc2", 00:04:49.178 "aliases": [ 00:04:49.178 "0a80de7e-b432-480d-a549-9dfcbbfb7104" 00:04:49.178 ], 00:04:49.178 "product_name": "Malloc disk", 00:04:49.178 "block_size": 512, 00:04:49.178 "num_blocks": 16384, 00:04:49.178 "uuid": "0a80de7e-b432-480d-a549-9dfcbbfb7104", 00:04:49.178 "assigned_rate_limits": { 00:04:49.178 "rw_ios_per_sec": 0, 00:04:49.178 "rw_mbytes_per_sec": 0, 00:04:49.178 "r_mbytes_per_sec": 0, 00:04:49.178 "w_mbytes_per_sec": 0 00:04:49.178 }, 00:04:49.178 "claimed": true, 00:04:49.178 "claim_type": "exclusive_write", 00:04:49.178 "zoned": false, 00:04:49.178 "supported_io_types": { 00:04:49.178 "read": true, 00:04:49.178 "write": true, 00:04:49.178 "unmap": true, 00:04:49.178 "flush": true, 00:04:49.178 "reset": true, 00:04:49.178 "nvme_admin": false, 00:04:49.178 "nvme_io": false, 00:04:49.178 "nvme_io_md": false, 00:04:49.178 "write_zeroes": true, 00:04:49.178 "zcopy": true, 00:04:49.178 "get_zone_info": false, 00:04:49.178 "zone_management": false, 00:04:49.178 "zone_append": false, 00:04:49.178 "compare": false, 00:04:49.178 "compare_and_write": false, 00:04:49.178 "abort": true, 00:04:49.178 "seek_hole": false, 00:04:49.178 "seek_data": false, 00:04:49.178 "copy": true, 00:04:49.178 "nvme_iov_md": false 00:04:49.178 }, 00:04:49.178 "memory_domains": [ 00:04:49.178 { 00:04:49.178 "dma_device_id": "system", 00:04:49.178 "dma_device_type": 1 00:04:49.178 }, 00:04:49.178 { 00:04:49.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.178 "dma_device_type": 2 00:04:49.178 } 00:04:49.178 ], 00:04:49.178 "driver_specific": {} 00:04:49.178 }, 00:04:49.178 { 00:04:49.178 "name": "Passthru0", 00:04:49.178 "aliases": [ 00:04:49.178 "d41f47b9-0d99-598d-90df-73c5693e48c9" 00:04:49.178 ], 00:04:49.178 "product_name": "passthru", 00:04:49.178 "block_size": 512, 00:04:49.178 "num_blocks": 16384, 00:04:49.178 "uuid": "d41f47b9-0d99-598d-90df-73c5693e48c9", 00:04:49.178 "assigned_rate_limits": { 00:04:49.178 "rw_ios_per_sec": 0, 00:04:49.178 "rw_mbytes_per_sec": 0, 00:04:49.178 "r_mbytes_per_sec": 0, 00:04:49.178 "w_mbytes_per_sec": 0 00:04:49.178 }, 00:04:49.178 "claimed": false, 00:04:49.178 "zoned": false, 00:04:49.178 "supported_io_types": { 00:04:49.178 "read": true, 00:04:49.178 "write": true, 00:04:49.178 "unmap": true, 00:04:49.178 "flush": true, 00:04:49.178 "reset": true, 
00:04:49.178 "nvme_admin": false, 00:04:49.178 "nvme_io": false, 00:04:49.178 "nvme_io_md": false, 00:04:49.178 "write_zeroes": true, 00:04:49.178 "zcopy": true, 00:04:49.178 "get_zone_info": false, 00:04:49.178 "zone_management": false, 00:04:49.178 "zone_append": false, 00:04:49.178 "compare": false, 00:04:49.178 "compare_and_write": false, 00:04:49.178 "abort": true, 00:04:49.178 "seek_hole": false, 00:04:49.178 "seek_data": false, 00:04:49.178 "copy": true, 00:04:49.178 "nvme_iov_md": false 00:04:49.178 }, 00:04:49.178 "memory_domains": [ 00:04:49.178 { 00:04:49.178 "dma_device_id": "system", 00:04:49.178 "dma_device_type": 1 00:04:49.178 }, 00:04:49.178 { 00:04:49.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.178 "dma_device_type": 2 00:04:49.178 } 00:04:49.178 ], 00:04:49.178 "driver_specific": { 00:04:49.178 "passthru": { 00:04:49.178 "name": "Passthru0", 00:04:49.178 "base_bdev_name": "Malloc2" 00:04:49.178 } 00:04:49.178 } 00:04:49.178 } 00:04:49.178 ]' 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.178 00:04:49.178 real 0m0.267s 00:04:49.178 user 0m0.168s 00:04:49.178 sys 0m0.035s 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.178 16:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.178 ************************************ 00:04:49.178 END TEST rpc_daemon_integrity 00:04:49.178 ************************************ 00:04:49.178 16:10:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:49.178 16:10:37 rpc -- rpc/rpc.sh@84 -- # killprocess 771473 00:04:49.178 16:10:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 771473 ']' 00:04:49.178 16:10:37 rpc -- common/autotest_common.sh@958 -- # kill -0 771473 00:04:49.178 16:10:37 rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.178 16:10:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.178 16:10:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771473 
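The killprocess trace above shows the guard sequence the harness runs before signalling the target: kill -0 confirms the pid is still alive, and ps -o comm= makes sure it is not about to signal a sudo wrapper. A condensed sketch of that pattern, assuming a Linux host; the function below is illustrative, not the autotest helper itself:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # process already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")     # resolve the process name
        [ "$name" = sudo ] && return 1              # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reap it if it is our child
    }
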
00:04:49.437 16:10:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.437 16:10:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.437 16:10:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771473' 00:04:49.437 killing process with pid 771473 00:04:49.437 16:10:37 rpc -- common/autotest_common.sh@973 -- # kill 771473 00:04:49.437 16:10:37 rpc -- common/autotest_common.sh@978 -- # wait 771473 00:04:49.697 00:04:49.697 real 0m2.060s 00:04:49.697 user 0m2.644s 00:04:49.697 sys 0m0.689s 00:04:49.697 16:10:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.697 16:10:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.697 ************************************ 00:04:49.697 END TEST rpc 00:04:49.697 ************************************ 00:04:49.697 16:10:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:49.697 16:10:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.697 16:10:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.697 16:10:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.697 ************************************ 00:04:49.697 START TEST skip_rpc 00:04:49.697 ************************************ 00:04:49.697 16:10:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:49.697 * Looking for test storage... 00:04:49.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.697 16:10:38 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.697 16:10:38 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.697 16:10:38 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.956 16:10:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.956 --rc genhtml_branch_coverage=1 00:04:49.956 --rc genhtml_function_coverage=1 00:04:49.956 --rc genhtml_legend=1 00:04:49.956 --rc geninfo_all_blocks=1 00:04:49.956 --rc geninfo_unexecuted_blocks=1 00:04:49.956 00:04:49.956 ' 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.956 --rc genhtml_branch_coverage=1 00:04:49.956 --rc genhtml_function_coverage=1 00:04:49.956 --rc genhtml_legend=1 00:04:49.956 --rc geninfo_all_blocks=1 00:04:49.956 --rc geninfo_unexecuted_blocks=1 00:04:49.956 00:04:49.956 ' 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.956 --rc genhtml_branch_coverage=1 00:04:49.956 --rc genhtml_function_coverage=1 00:04:49.956 --rc genhtml_legend=1 00:04:49.956 --rc geninfo_all_blocks=1 00:04:49.956 --rc geninfo_unexecuted_blocks=1 00:04:49.956 00:04:49.956 ' 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.956 --rc genhtml_branch_coverage=1 00:04:49.956 --rc genhtml_function_coverage=1 00:04:49.956 --rc genhtml_legend=1 00:04:49.956 --rc geninfo_all_blocks=1 00:04:49.956 --rc geninfo_unexecuted_blocks=1 00:04:49.956 00:04:49.956 ' 00:04:49.956 16:10:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.956 16:10:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.956 16:10:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.956 16:10:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.956 ************************************ 00:04:49.956 START TEST skip_rpc 00:04:49.956 ************************************ 00:04:49.956 16:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:49.956 
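The cmp_versions trace just above splits the 'lcov --version' number on '.', '-' and ':' and compares it field by field against 2, so a 1.x lcov selects the --rc branch/function coverage flags. A simplified sketch of that comparison idiom (scripts/common.sh handles more operators; the names below are illustrative):

    # Return success when version $1 is strictly lower than version $2.
    ver_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                    # equal, so not lower
    }

    ver_lt 1.15 2 && echo 'pre-2.0 lcov: enable branch/function coverage flags'
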
16:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=772096 00:04:49.956 16:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.956 16:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:49.956 16:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:49.956 [2024-12-16 16:10:38.445350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:49.956 [2024-12-16 16:10:38.445385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772096 ] 00:04:49.956 [2024-12-16 16:10:38.518237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.956 [2024-12-16 16:10:38.540207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 772096 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 772096 ']' 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 772096 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772096 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772096' 00:04:55.225 killing process with pid 772096 00:04:55.225 16:10:43 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 772096 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 772096 00:04:55.225 00:04:55.225 real 0m5.351s 00:04:55.225 user 0m5.124s 00:04:55.225 sys 0m0.264s 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.225 16:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.225 ************************************ 00:04:55.225 END TEST skip_rpc 00:04:55.225 ************************************ 00:04:55.225 16:10:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:55.225 16:10:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.225 16:10:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.225 16:10:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.225 ************************************ 00:04:55.225 START TEST skip_rpc_with_json 00:04:55.225 ************************************ 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=773018 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 773018 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 773018 ']' 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.225 16:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.484 [2024-12-16 16:10:43.872504] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:55.484 [2024-12-16 16:10:43.872544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773018 ] 00:04:55.484 [2024-12-16 16:10:43.947775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.484 [2024-12-16 16:10:43.970333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.743 [2024-12-16 16:10:44.180599] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:55.743 request: 00:04:55.743 { 00:04:55.743 "trtype": "tcp", 00:04:55.743 "method": "nvmf_get_transports", 00:04:55.743 "req_id": 1 00:04:55.743 } 00:04:55.743 Got JSON-RPC error response 00:04:55.743 response: 00:04:55.743 { 00:04:55.743 "code": -19, 00:04:55.743 "message": "No such device" 00:04:55.743 } 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.743 [2024-12-16 16:10:44.192700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.743 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.002 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.002 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.002 { 00:04:56.002 "subsystems": [ 00:04:56.002 { 00:04:56.002 "subsystem": "fsdev", 00:04:56.002 "config": [ 00:04:56.002 { 00:04:56.002 "method": "fsdev_set_opts", 00:04:56.002 "params": { 00:04:56.002 "fsdev_io_pool_size": 65535, 00:04:56.002 "fsdev_io_cache_size": 256 00:04:56.002 } 00:04:56.002 } 00:04:56.002 ] 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "subsystem": "vfio_user_target", 00:04:56.002 "config": null 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "subsystem": "keyring", 00:04:56.002 "config": [] 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "subsystem": "iobuf", 00:04:56.002 "config": [ 00:04:56.002 { 00:04:56.002 "method": "iobuf_set_options", 00:04:56.002 "params": { 00:04:56.002 "small_pool_count": 8192, 00:04:56.002 "large_pool_count": 1024, 00:04:56.002 "small_bufsize": 8192, 00:04:56.002 "large_bufsize": 135168, 00:04:56.002 "enable_numa": false 00:04:56.002 } 00:04:56.002 } 00:04:56.002 
] 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "subsystem": "sock", 00:04:56.002 "config": [ 00:04:56.002 { 00:04:56.002 "method": "sock_set_default_impl", 00:04:56.002 "params": { 00:04:56.002 "impl_name": "posix" 00:04:56.002 } 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "method": "sock_impl_set_options", 00:04:56.002 "params": { 00:04:56.002 "impl_name": "ssl", 00:04:56.002 "recv_buf_size": 4096, 00:04:56.002 "send_buf_size": 4096, 00:04:56.002 "enable_recv_pipe": true, 00:04:56.002 "enable_quickack": false, 00:04:56.002 "enable_placement_id": 0, 00:04:56.002 "enable_zerocopy_send_server": true, 00:04:56.002 "enable_zerocopy_send_client": false, 00:04:56.002 "zerocopy_threshold": 0, 00:04:56.002 "tls_version": 0, 00:04:56.002 "enable_ktls": false 00:04:56.002 } 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "method": "sock_impl_set_options", 00:04:56.002 "params": { 00:04:56.002 "impl_name": "posix", 00:04:56.002 "recv_buf_size": 2097152, 00:04:56.002 "send_buf_size": 2097152, 00:04:56.002 "enable_recv_pipe": true, 00:04:56.002 "enable_quickack": false, 00:04:56.002 "enable_placement_id": 0, 00:04:56.002 "enable_zerocopy_send_server": true, 00:04:56.002 "enable_zerocopy_send_client": false, 00:04:56.002 "zerocopy_threshold": 0, 00:04:56.002 "tls_version": 0, 00:04:56.002 "enable_ktls": false 00:04:56.002 } 00:04:56.002 } 00:04:56.002 ] 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "subsystem": "vmd", 00:04:56.002 "config": [] 00:04:56.002 }, 00:04:56.002 { 00:04:56.002 "subsystem": "accel", 00:04:56.002 "config": [ 00:04:56.002 { 00:04:56.002 "method": "accel_set_options", 00:04:56.002 "params": { 00:04:56.002 "small_cache_size": 128, 00:04:56.002 "large_cache_size": 16, 00:04:56.002 "task_count": 2048, 00:04:56.002 "sequence_count": 2048, 00:04:56.002 "buf_count": 2048 00:04:56.002 } 00:04:56.002 } 00:04:56.002 ] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "bdev", 00:04:56.003 "config": [ 00:04:56.003 { 00:04:56.003 "method": "bdev_set_options", 00:04:56.003 "params": { 00:04:56.003 "bdev_io_pool_size": 65535, 00:04:56.003 "bdev_io_cache_size": 256, 00:04:56.003 "bdev_auto_examine": true, 00:04:56.003 "iobuf_small_cache_size": 128, 00:04:56.003 "iobuf_large_cache_size": 16 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "bdev_raid_set_options", 00:04:56.003 "params": { 00:04:56.003 "process_window_size_kb": 1024, 00:04:56.003 "process_max_bandwidth_mb_sec": 0 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "bdev_iscsi_set_options", 00:04:56.003 "params": { 00:04:56.003 "timeout_sec": 30 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "bdev_nvme_set_options", 00:04:56.003 "params": { 00:04:56.003 "action_on_timeout": "none", 00:04:56.003 "timeout_us": 0, 00:04:56.003 "timeout_admin_us": 0, 00:04:56.003 "keep_alive_timeout_ms": 10000, 00:04:56.003 "arbitration_burst": 0, 00:04:56.003 "low_priority_weight": 0, 00:04:56.003 "medium_priority_weight": 0, 00:04:56.003 "high_priority_weight": 0, 00:04:56.003 "nvme_adminq_poll_period_us": 10000, 00:04:56.003 "nvme_ioq_poll_period_us": 0, 00:04:56.003 "io_queue_requests": 0, 00:04:56.003 "delay_cmd_submit": true, 00:04:56.003 "transport_retry_count": 4, 00:04:56.003 "bdev_retry_count": 3, 00:04:56.003 "transport_ack_timeout": 0, 00:04:56.003 "ctrlr_loss_timeout_sec": 0, 00:04:56.003 "reconnect_delay_sec": 0, 00:04:56.003 "fast_io_fail_timeout_sec": 0, 00:04:56.003 "disable_auto_failback": false, 00:04:56.003 "generate_uuids": false, 00:04:56.003 "transport_tos": 0, 
00:04:56.003 "nvme_error_stat": false, 00:04:56.003 "rdma_srq_size": 0, 00:04:56.003 "io_path_stat": false, 00:04:56.003 "allow_accel_sequence": false, 00:04:56.003 "rdma_max_cq_size": 0, 00:04:56.003 "rdma_cm_event_timeout_ms": 0, 00:04:56.003 "dhchap_digests": [ 00:04:56.003 "sha256", 00:04:56.003 "sha384", 00:04:56.003 "sha512" 00:04:56.003 ], 00:04:56.003 "dhchap_dhgroups": [ 00:04:56.003 "null", 00:04:56.003 "ffdhe2048", 00:04:56.003 "ffdhe3072", 00:04:56.003 "ffdhe4096", 00:04:56.003 "ffdhe6144", 00:04:56.003 "ffdhe8192" 00:04:56.003 ], 00:04:56.003 "rdma_umr_per_io": false 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "bdev_nvme_set_hotplug", 00:04:56.003 "params": { 00:04:56.003 "period_us": 100000, 00:04:56.003 "enable": false 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "bdev_wait_for_examine" 00:04:56.003 } 00:04:56.003 ] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "scsi", 00:04:56.003 "config": null 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "scheduler", 00:04:56.003 "config": [ 00:04:56.003 { 00:04:56.003 "method": "framework_set_scheduler", 00:04:56.003 "params": { 00:04:56.003 "name": "static" 00:04:56.003 } 00:04:56.003 } 00:04:56.003 ] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "vhost_scsi", 00:04:56.003 "config": [] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "vhost_blk", 00:04:56.003 "config": [] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "ublk", 00:04:56.003 "config": [] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "nbd", 00:04:56.003 "config": [] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "nvmf", 00:04:56.003 "config": [ 00:04:56.003 { 00:04:56.003 "method": "nvmf_set_config", 00:04:56.003 "params": { 00:04:56.003 "discovery_filter": "match_any", 00:04:56.003 "admin_cmd_passthru": { 00:04:56.003 "identify_ctrlr": false 00:04:56.003 }, 00:04:56.003 "dhchap_digests": [ 00:04:56.003 "sha256", 00:04:56.003 "sha384", 00:04:56.003 "sha512" 00:04:56.003 ], 00:04:56.003 "dhchap_dhgroups": [ 00:04:56.003 "null", 00:04:56.003 "ffdhe2048", 00:04:56.003 "ffdhe3072", 00:04:56.003 "ffdhe4096", 00:04:56.003 "ffdhe6144", 00:04:56.003 "ffdhe8192" 00:04:56.003 ] 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "nvmf_set_max_subsystems", 00:04:56.003 "params": { 00:04:56.003 "max_subsystems": 1024 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "nvmf_set_crdt", 00:04:56.003 "params": { 00:04:56.003 "crdt1": 0, 00:04:56.003 "crdt2": 0, 00:04:56.003 "crdt3": 0 00:04:56.003 } 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "method": "nvmf_create_transport", 00:04:56.003 "params": { 00:04:56.003 "trtype": "TCP", 00:04:56.003 "max_queue_depth": 128, 00:04:56.003 "max_io_qpairs_per_ctrlr": 127, 00:04:56.003 "in_capsule_data_size": 4096, 00:04:56.003 "max_io_size": 131072, 00:04:56.003 "io_unit_size": 131072, 00:04:56.003 "max_aq_depth": 128, 00:04:56.003 "num_shared_buffers": 511, 00:04:56.003 "buf_cache_size": 4294967295, 00:04:56.003 "dif_insert_or_strip": false, 00:04:56.003 "zcopy": false, 00:04:56.003 "c2h_success": true, 00:04:56.003 "sock_priority": 0, 00:04:56.003 "abort_timeout_sec": 1, 00:04:56.003 "ack_timeout": 0, 00:04:56.003 "data_wr_pool_size": 0 00:04:56.003 } 00:04:56.003 } 00:04:56.003 ] 00:04:56.003 }, 00:04:56.003 { 00:04:56.003 "subsystem": "iscsi", 00:04:56.003 "config": [ 00:04:56.003 { 00:04:56.003 "method": "iscsi_set_options", 00:04:56.003 "params": { 00:04:56.003 "node_base": 
"iqn.2016-06.io.spdk", 00:04:56.003 "max_sessions": 128, 00:04:56.003 "max_connections_per_session": 2, 00:04:56.003 "max_queue_depth": 64, 00:04:56.003 "default_time2wait": 2, 00:04:56.003 "default_time2retain": 20, 00:04:56.003 "first_burst_length": 8192, 00:04:56.003 "immediate_data": true, 00:04:56.003 "allow_duplicated_isid": false, 00:04:56.003 "error_recovery_level": 0, 00:04:56.003 "nop_timeout": 60, 00:04:56.003 "nop_in_interval": 30, 00:04:56.003 "disable_chap": false, 00:04:56.003 "require_chap": false, 00:04:56.003 "mutual_chap": false, 00:04:56.003 "chap_group": 0, 00:04:56.003 "max_large_datain_per_connection": 64, 00:04:56.003 "max_r2t_per_connection": 4, 00:04:56.003 "pdu_pool_size": 36864, 00:04:56.003 "immediate_data_pool_size": 16384, 00:04:56.003 "data_out_pool_size": 2048 00:04:56.003 } 00:04:56.003 } 00:04:56.003 ] 00:04:56.003 } 00:04:56.003 ] 00:04:56.003 } 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 773018 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773018 ']' 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773018 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773018 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773018' 00:04:56.003 killing process with pid 773018 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773018 00:04:56.003 16:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773018 00:04:56.262 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=773148 00:04:56.262 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.262 16:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 773148 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773148 ']' 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773148 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773148 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.658 16:10:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773148' 00:05:01.658 killing process with pid 773148 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773148 00:05:01.658 16:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773148 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.658 00:05:01.658 real 0m6.252s 00:05:01.658 user 0m5.942s 00:05:01.658 sys 0m0.600s 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.658 ************************************ 00:05:01.658 END TEST skip_rpc_with_json 00:05:01.658 ************************************ 00:05:01.658 16:10:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:01.658 16:10:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.658 16:10:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.658 16:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.658 ************************************ 00:05:01.658 START TEST skip_rpc_with_delay 00:05:01.658 ************************************ 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.658 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.659 [2024-12-16 16:10:50.199431] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.659 00:05:01.659 real 0m0.071s 00:05:01.659 user 0m0.039s 00:05:01.659 sys 0m0.031s 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.659 16:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:01.659 ************************************ 00:05:01.659 END TEST skip_rpc_with_delay 00:05:01.659 ************************************ 00:05:01.659 16:10:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:01.659 16:10:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:01.659 16:10:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:01.659 16:10:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.659 16:10:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.659 16:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.918 ************************************ 00:05:01.918 START TEST exit_on_failed_rpc_init 00:05:01.918 ************************************ 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=774207 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 774207 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 774207 ']' 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.918 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.918 [2024-12-16 16:10:50.332061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
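The skip_rpc_with_delay block that just finished passes as long as spdk_tgt refuses this flag combination. A minimal sketch of the check (binary path as used in this workspace; any non-zero exit counts as a pass, matching es=1 in the trace above):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: startup aborts with "Cannot use '--wait-for-rpc' if no RPC server is going to be started."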
00:05:01.918 [2024-12-16 16:10:50.332107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774207 ] 00:05:01.918 [2024-12-16 16:10:50.406850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.918 [2024-12-16 16:10:50.429147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:02.177 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.177 [2024-12-16 16:10:50.693451] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:02.177 [2024-12-16 16:10:50.693490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774213 ] 00:05:02.177 [2024-12-16 16:10:50.766334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.437 [2024-12-16 16:10:50.788602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.437 [2024-12-16 16:10:50.788657] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
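This is the failure exit_on_failed_rpc_init is designed to provoke: the second target was pointed at the default /var/tmp/spdk.sock, which the first target (pid 774207) already owns. Reduced to its essentials (a sketch, not the test's exact wrapper logic):

  build/bin/spdk_tgt -m 0x1 &   # first target claims the default /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2     # same default socket -> "in use. Specify another.", non-zero exit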
00:05:02.437 [2024-12-16 16:10:50.788667] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:02.437 [2024-12-16 16:10:50.788673] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 774207 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 774207 ']' 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 774207 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774207 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774207' 00:05:02.437 killing process with pid 774207 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 774207 00:05:02.437 16:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 774207 00:05:02.696 00:05:02.696 real 0m0.888s 00:05:02.696 user 0m0.931s 00:05:02.696 sys 0m0.381s 00:05:02.696 16:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.696 16:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.696 ************************************ 00:05:02.696 END TEST exit_on_failed_rpc_init 00:05:02.696 ************************************ 00:05:02.696 16:10:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.696 00:05:02.696 real 0m13.023s 00:05:02.696 user 0m12.249s 00:05:02.696 sys 0m1.554s 00:05:02.696 16:10:51 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.696 16:10:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.696 ************************************ 00:05:02.696 END TEST skip_rpc 00:05:02.696 ************************************ 00:05:02.696 16:10:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:02.696 16:10:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.696 16:10:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.696 16:10:51 -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.696 ************************************ 00:05:02.696 START TEST rpc_client 00:05:02.696 ************************************ 00:05:02.696 16:10:51 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:02.956 * Looking for test storage... 00:05:02.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.956 16:10:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.956 --rc genhtml_branch_coverage=1 00:05:02.956 --rc genhtml_function_coverage=1 00:05:02.956 --rc genhtml_legend=1 00:05:02.956 --rc geninfo_all_blocks=1 00:05:02.956 --rc geninfo_unexecuted_blocks=1 00:05:02.956 00:05:02.956 ' 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.956 --rc genhtml_branch_coverage=1 00:05:02.956 --rc genhtml_function_coverage=1 00:05:02.956 --rc genhtml_legend=1 00:05:02.956 --rc geninfo_all_blocks=1 00:05:02.956 --rc geninfo_unexecuted_blocks=1 00:05:02.956 00:05:02.956 ' 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.956 --rc genhtml_branch_coverage=1 00:05:02.956 --rc genhtml_function_coverage=1 00:05:02.956 --rc genhtml_legend=1 00:05:02.956 --rc geninfo_all_blocks=1 00:05:02.956 --rc geninfo_unexecuted_blocks=1 00:05:02.956 00:05:02.956 ' 00:05:02.956 16:10:51 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.956 --rc genhtml_branch_coverage=1 00:05:02.956 --rc genhtml_function_coverage=1 00:05:02.956 --rc genhtml_legend=1 00:05:02.956 --rc geninfo_all_blocks=1 00:05:02.956 --rc geninfo_unexecuted_blocks=1 00:05:02.956 00:05:02.956 ' 00:05:02.956 16:10:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:02.956 OK 00:05:02.957 16:10:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.957 00:05:02.957 real 0m0.209s 00:05:02.957 user 0m0.127s 00:05:02.957 sys 0m0.094s 00:05:02.957 16:10:51 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.957 16:10:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.957 ************************************ 00:05:02.957 END TEST rpc_client 00:05:02.957 ************************************ 00:05:02.957 16:10:51 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
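The ver1/ver2 trace above is common.sh deciding whether the installed lcov predates version 2 before exporting coverage flags; the same check repeats below for json_config. Condensed, the comparison splits on '.', '-' and ':' and compares field-wise, with missing fields treated as 0 (a sketch of the scripts/common.sh logic):

  IFS=.-: read -ra ver1 <<< "1.15"   # installed lcov version, as parsed above
  IFS=.-: read -ra ver2 <<< "2"
  for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo '>='; break; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo '<'; break; }   # 1.15 < 2 -> coverage opts stay on
  done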
00:05:02.957 16:10:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.957 16:10:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.957 16:10:51 -- common/autotest_common.sh@10 -- # set +x 00:05:02.957 ************************************ 00:05:02.957 START TEST json_config 00:05:02.957 ************************************ 00:05:02.957 16:10:51 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.216 16:10:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.216 16:10:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.216 16:10:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.216 16:10:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.216 16:10:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.216 16:10:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:03.216 16:10:51 json_config -- scripts/common.sh@345 -- # : 1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.216 16:10:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.216 16:10:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@353 -- # local d=1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.216 16:10:51 json_config -- scripts/common.sh@355 -- # echo 1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.216 16:10:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@353 -- # local d=2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.216 16:10:51 json_config -- scripts/common.sh@355 -- # echo 2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.216 16:10:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.216 16:10:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.216 16:10:51 json_config -- scripts/common.sh@368 -- # return 0 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.216 --rc genhtml_branch_coverage=1 00:05:03.216 --rc genhtml_function_coverage=1 00:05:03.216 --rc genhtml_legend=1 00:05:03.216 --rc geninfo_all_blocks=1 00:05:03.216 --rc geninfo_unexecuted_blocks=1 00:05:03.216 00:05:03.216 ' 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.216 --rc genhtml_branch_coverage=1 00:05:03.216 --rc genhtml_function_coverage=1 00:05:03.216 --rc genhtml_legend=1 00:05:03.216 --rc geninfo_all_blocks=1 00:05:03.216 --rc geninfo_unexecuted_blocks=1 00:05:03.216 00:05:03.216 ' 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.216 --rc genhtml_branch_coverage=1 00:05:03.216 --rc genhtml_function_coverage=1 00:05:03.216 --rc genhtml_legend=1 00:05:03.216 --rc geninfo_all_blocks=1 00:05:03.216 --rc geninfo_unexecuted_blocks=1 00:05:03.216 00:05:03.216 ' 00:05:03.216 16:10:51 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.217 --rc genhtml_branch_coverage=1 00:05:03.217 --rc genhtml_function_coverage=1 00:05:03.217 --rc genhtml_legend=1 00:05:03.217 --rc geninfo_all_blocks=1 00:05:03.217 --rc geninfo_unexecuted_blocks=1 00:05:03.217 00:05:03.217 ' 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:03.217 16:10:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.217 16:10:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.217 16:10:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.217 16:10:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.217 16:10:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.217 16:10:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.217 16:10:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.217 16:10:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.217 16:10:51 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.217 16:10:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
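The host NQN and host ID exported here come from nvme-cli at source time. A sketch consistent with the values above (the exact derivation of NVME_HOSTID in common.sh may differ; the parameter expansion below is an assumption):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<host uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed: keep only the UUID portion of the NQN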
00:05:03.217 16:10:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.217 16:10:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:03.217 INFO: JSON configuration test init 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.217 16:10:51 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:03.217 16:10:51 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:03.217 16:10:51 json_config -- json_config/common.sh@10 -- # shift 00:05:03.217 16:10:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.217 16:10:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.217 16:10:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.217 16:10:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.217 16:10:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.217 16:10:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=774559 00:05:03.217 16:10:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.217 Waiting for target to run... 00:05:03.217 16:10:51 json_config -- json_config/common.sh@25 -- # waitforlisten 774559 /var/tmp/spdk_tgt.sock 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 774559 ']' 00:05:03.217 16:10:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.217 16:10:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.217 [2024-12-16 16:10:51.801248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
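json_config_test_start_app, traced here, boils down to launching the target on a private RPC socket and blocking until it answers. A condensed sketch (waitforlisten's real polling in autotest_common.sh is more elaborate; rpc_get_methods is used below because it is served even before framework init):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  app_pid=$!
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1   # retry until the UNIX-domain RPC socket accepts requests
  done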
00:05:03.217 [2024-12-16 16:10:51.801295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774559 ] 00:05:03.785 [2024-12-16 16:10:52.091903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.785 [2024-12-16 16:10:52.105215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.042 16:10:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.042 16:10:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:04.042 16:10:52 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.042 00:05:04.042 16:10:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:04.042 16:10:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:04.042 16:10:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.042 16:10:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.042 16:10:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:04.042 16:10:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:04.042 16:10:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.042 16:10:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.299 16:10:52 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:04.299 16:10:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:04.299 16:10:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:07.586 16:10:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.586 16:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:07.586 16:10:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:07.586 16:10:55 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@54 -- # sort 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:07.586 16:10:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.586 16:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:07.586 16:10:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.586 16:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:07.586 16:10:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.586 16:10:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.586 MallocForNvmf0 00:05:07.586 16:10:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:07.586 16:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:07.845 MallocForNvmf1 00:05:07.845 16:10:56 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:07.845 16:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.104 [2024-12-16 16:10:56.531472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.104 16:10:56 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.104 16:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.362 16:10:56 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:08.362 16:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:08.362 16:10:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:08.362 16:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:08.621 16:10:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:08.621 16:10:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:08.879 [2024-12-16 16:10:57.321833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.879 16:10:57 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:08.879 16:10:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.879 16:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.879 16:10:57 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:08.879 16:10:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.879 16:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.879 16:10:57 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:08.879 16:10:57 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.879 16:10:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.138 MallocBdevForConfigChangeCheck 00:05:09.138 16:10:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:09.138 16:10:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.138 16:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.138 16:10:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:09.138 16:10:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.396 16:10:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:09.396 INFO: shutting down applications... 
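The target being torn down here was configured entirely over JSON-RPC. Condensed, the create_nvmf_subsystem_config sequence traced above was (run from the spdk checkout; socket path as in this test):

  rpc=scripts/rpc.py; sock=/var/tmp/spdk_tgt.sock
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420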
00:05:09.396 16:10:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:09.396 16:10:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:09.396 16:10:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:09.396 16:10:57 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:11.299 Calling clear_iscsi_subsystem 00:05:11.299 Calling clear_nvmf_subsystem 00:05:11.299 Calling clear_nbd_subsystem 00:05:11.299 Calling clear_ublk_subsystem 00:05:11.299 Calling clear_vhost_blk_subsystem 00:05:11.299 Calling clear_vhost_scsi_subsystem 00:05:11.299 Calling clear_bdev_subsystem 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@352 -- # break 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:11.299 16:10:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:11.299 16:10:59 json_config -- json_config/common.sh@31 -- # local app=target 00:05:11.299 16:10:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.299 16:10:59 json_config -- json_config/common.sh@35 -- # [[ -n 774559 ]] 00:05:11.299 16:10:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 774559 00:05:11.299 16:10:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.299 16:10:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.299 16:10:59 json_config -- json_config/common.sh@41 -- # kill -0 774559 00:05:11.299 16:10:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.866 16:11:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.866 16:11:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.866 16:11:00 json_config -- json_config/common.sh@41 -- # kill -0 774559 00:05:11.866 16:11:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.866 16:11:00 json_config -- json_config/common.sh@43 -- # break 00:05:11.866 16:11:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.866 16:11:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.866 SPDK target shutdown done 00:05:11.866 16:11:00 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:11.866 INFO: relaunching applications... 
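The shutdown wait just traced (json_config/common.sh) amounts to a SIGINT followed by up to 30 liveness probes half a second apart:

  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # process gone -> clean shutdown
      sleep 0.5
  done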
00:05:11.866 16:11:00 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.866 16:11:00 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.866 16:11:00 json_config -- json_config/common.sh@10 -- # shift 00:05:11.866 16:11:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.866 16:11:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.866 16:11:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.866 16:11:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.866 16:11:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.866 16:11:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=776042 00:05:11.866 16:11:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.866 Waiting for target to run... 00:05:11.867 16:11:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.867 16:11:00 json_config -- json_config/common.sh@25 -- # waitforlisten 776042 /var/tmp/spdk_tgt.sock 00:05:11.867 16:11:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 776042 ']' 00:05:11.867 16:11:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.867 16:11:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.867 16:11:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.867 16:11:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.867 16:11:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.867 [2024-12-16 16:11:00.457904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:11.867 [2024-12-16 16:11:00.457958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776042 ] 00:05:12.434 [2024-12-16 16:11:00.924326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.434 [2024-12-16 16:11:00.945295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.721 [2024-12-16 16:11:03.952953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.721 [2024-12-16 16:11:03.985250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.288 16:11:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.288 16:11:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.288 16:11:04 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.288 00:05:16.288 16:11:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:16.288 16:11:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:16.288 INFO: Checking if target configuration is the same... 
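json_diff.sh, traced next, does not compare the files byte-for-byte: both sides are first normalized with config_filter.py -method sort, so only semantic differences survive the diff. Condensed (temp-file handling simplified; mktemp names like /tmp/62.zwT above are generated per run):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'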
00:05:16.288 16:11:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:16.288 16:11:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.288 16:11:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.289 + '[' 2 -ne 2 ']' 00:05:16.289 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.289 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:16.289 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.289 +++ basename /dev/fd/62 00:05:16.289 ++ mktemp /tmp/62.XXX 00:05:16.289 + tmp_file_1=/tmp/62.zwT 00:05:16.289 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.289 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.289 + tmp_file_2=/tmp/spdk_tgt_config.json.a9V 00:05:16.289 + ret=0 00:05:16.289 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.547 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.547 + diff -u /tmp/62.zwT /tmp/spdk_tgt_config.json.a9V 00:05:16.547 + echo 'INFO: JSON config files are the same' 00:05:16.547 INFO: JSON config files are the same 00:05:16.547 + rm /tmp/62.zwT /tmp/spdk_tgt_config.json.a9V 00:05:16.547 + exit 0 00:05:16.547 16:11:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:16.547 16:11:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:16.547 INFO: changing configuration and checking if this can be detected... 00:05:16.547 16:11:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.547 16:11:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.805 16:11:05 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.805 16:11:05 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:16.805 16:11:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.805 + '[' 2 -ne 2 ']' 00:05:16.805 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.805 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:16.805 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.805 +++ basename /dev/fd/62 00:05:16.805 ++ mktemp /tmp/62.XXX 00:05:16.805 + tmp_file_1=/tmp/62.znO 00:05:16.805 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.805 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.805 + tmp_file_2=/tmp/spdk_tgt_config.json.Qws 00:05:16.805 + ret=0 00:05:16.805 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:17.063 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:17.063 + diff -u /tmp/62.znO /tmp/spdk_tgt_config.json.Qws 00:05:17.063 + ret=1 00:05:17.063 + echo '=== Start of file: /tmp/62.znO ===' 00:05:17.063 + cat /tmp/62.znO 00:05:17.063 + echo '=== End of file: /tmp/62.znO ===' 00:05:17.063 + echo '' 00:05:17.063 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Qws ===' 00:05:17.063 + cat /tmp/spdk_tgt_config.json.Qws 00:05:17.063 + echo '=== End of file: /tmp/spdk_tgt_config.json.Qws ===' 00:05:17.063 + echo '' 00:05:17.063 + rm /tmp/62.znO /tmp/spdk_tgt_config.json.Qws 00:05:17.063 + exit 1 00:05:17.063 16:11:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:17.063 INFO: configuration change detected. 00:05:17.063 16:11:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:17.063 16:11:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:17.063 16:11:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.063 16:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.063 16:11:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 776042 ]] 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.322 16:11:05 json_config -- json_config/json_config.sh@330 -- # killprocess 776042 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@954 -- # '[' -z 776042 ']' 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@958 -- # kill -0 776042 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@959 -- # uname 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.322 16:11:05 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776042 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776042' 00:05:17.322 killing process with pid 776042 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@973 -- # kill 776042 00:05:17.322 16:11:05 json_config -- common/autotest_common.sh@978 -- # wait 776042 00:05:18.698 16:11:07 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.698 16:11:07 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:18.698 16:11:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.698 16:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.958 16:11:07 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:18.958 16:11:07 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:18.958 INFO: Success 00:05:18.958 00:05:18.958 real 0m15.758s 00:05:18.958 user 0m16.944s 00:05:18.958 sys 0m1.958s 00:05:18.958 16:11:07 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.958 16:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.958 ************************************ 00:05:18.958 END TEST json_config 00:05:18.958 ************************************ 00:05:18.958 16:11:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.958 16:11:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.958 16:11:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.958 16:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.958 ************************************ 00:05:18.958 START TEST json_config_extra_key 00:05:18.958 ************************************ 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.958 16:11:07 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 16:11:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.958 --rc genhtml_branch_coverage=1 00:05:18.958 --rc genhtml_function_coverage=1 00:05:18.958 --rc genhtml_legend=1 00:05:18.958 --rc geninfo_all_blocks=1 00:05:18.958 --rc geninfo_unexecuted_blocks=1 00:05:18.958 00:05:18.958 ' 00:05:18.958 16:11:07 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.958 16:11:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.958 16:11:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.958 16:11:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.958 16:11:07 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.958 16:11:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.958 16:11:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.958 16:11:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.958 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:18.958 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.959 INFO: launching applications... 
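The declare -A lines above show how the extra_key test's common.sh keys all per-application state (PID, RPC socket, startup parameters, config path) by an app name such as "target". A minimal sketch of that bookkeeping, using the values visible in the trace:

    # Sketch of the per-app state tables set up above (values from the trace).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    declare -A app_pid=([target]='')                    # filled in once spdk_tgt starts
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    # Later lookups are all keyed by the app name:
    echo "target RPC socket: ${app_socket[target]}"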
00:05:18.959 16:11:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.959 16:11:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.959 16:11:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=777331 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.218 Waiting for target to run... 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 777331 /var/tmp/spdk_tgt.sock 00:05:19.218 16:11:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 777331 ']' 00:05:19.218 16:11:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.218 16:11:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.218 16:11:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.218 16:11:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.218 16:11:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.218 16:11:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.218 [2024-12-16 16:11:07.618440] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:19.218 [2024-12-16 16:11:07.618494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777331 ] 00:05:19.477 [2024-12-16 16:11:08.079866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.735 [2024-12-16 16:11:08.101267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.994 16:11:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.994 16:11:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.994 00:05:19.994 16:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.994 INFO: shutting down applications... 
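waitforlisten, traced above with max_retries=100, blocks until the freshly launched spdk_tgt is up and its UNIX RPC socket answers. A rough sketch of that style of readiness check (the loop body is an assumption about the helper's logic, not the exact script):

    # Rough sketch of a waitforlisten-style readiness check (assumed logic).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    pid=777331; sock=/var/tmp/spdk_tgt.sock; max_retries=100
    for (( n = 0; n < max_retries; n++ )); do
        kill -0 "$pid" 2>/dev/null || { echo 'target died during startup'; exit 1; }
        [[ -S "$sock" ]] && "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done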
00:05:19.994 16:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 777331 ]] 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 777331 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 777331 00:05:19.994 16:11:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 777331 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.562 16:11:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.562 SPDK target shutdown done 00:05:20.562 16:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.562 Success 00:05:20.562 00:05:20.562 real 0m1.578s 00:05:20.562 user 0m1.178s 00:05:20.562 sys 0m0.570s 00:05:20.562 16:11:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.562 16:11:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.562 ************************************ 00:05:20.562 END TEST json_config_extra_key 00:05:20.562 ************************************ 00:05:20.562 16:11:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.562 16:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.562 16:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.562 16:11:08 -- common/autotest_common.sh@10 -- # set +x 00:05:20.562 ************************************ 00:05:20.562 START TEST alias_rpc 00:05:20.562 ************************************ 00:05:20.562 16:11:09 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.562 * Looking for test storage... 
00:05:20.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:20.562 16:11:09 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.562 16:11:09 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.562 16:11:09 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.821 16:11:09 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.821 16:11:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.821 16:11:09 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.821 16:11:09 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.821 --rc genhtml_branch_coverage=1 00:05:20.821 --rc genhtml_function_coverage=1 00:05:20.821 --rc genhtml_legend=1 00:05:20.821 --rc geninfo_all_blocks=1 00:05:20.821 --rc geninfo_unexecuted_blocks=1 00:05:20.821 00:05:20.821 ' 00:05:20.821 16:11:09 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.821 --rc genhtml_branch_coverage=1 00:05:20.821 --rc genhtml_function_coverage=1 00:05:20.821 --rc genhtml_legend=1 00:05:20.821 --rc geninfo_all_blocks=1 00:05:20.821 --rc geninfo_unexecuted_blocks=1 00:05:20.821 00:05:20.821 ' 00:05:20.821 16:11:09 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.821 --rc genhtml_branch_coverage=1 00:05:20.821 --rc genhtml_function_coverage=1 00:05:20.821 --rc genhtml_legend=1 00:05:20.821 --rc geninfo_all_blocks=1 00:05:20.821 --rc geninfo_unexecuted_blocks=1 00:05:20.821 00:05:20.821 ' 00:05:20.821 16:11:09 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.821 --rc genhtml_branch_coverage=1 00:05:20.821 --rc genhtml_function_coverage=1 00:05:20.821 --rc genhtml_legend=1 00:05:20.821 --rc geninfo_all_blocks=1 00:05:20.821 --rc geninfo_unexecuted_blocks=1 00:05:20.821 00:05:20.821 ' 00:05:20.822 16:11:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.822 16:11:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=777783 00:05:20.822 16:11:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 777783 00:05:20.822 16:11:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.822 16:11:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 777783 ']' 00:05:20.822 16:11:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.822 16:11:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.822 16:11:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.822 16:11:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.822 16:11:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.822 [2024-12-16 16:11:09.255071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
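killprocess, used above for the json_config target and again below for alias_rpc, checks that the PID still names an SPDK reactor before killing and reaping it. A minimal sketch of that guard, grounded in the ps/kill/wait calls in the trace (the real helper handles the sudo case differently; simplified here):

    # Sketch of the killprocess guard traced in this run (simplified).
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                        # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [[ "$process_name" = sudo ]] && return 1          # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it
    }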
00:05:20.822 [2024-12-16 16:11:09.255123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777783 ] 00:05:20.822 [2024-12-16 16:11:09.330613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.822 [2024-12-16 16:11:09.353712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.080 16:11:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.080 16:11:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.080 16:11:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:21.339 16:11:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 777783 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 777783 ']' 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 777783 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777783 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777783' 00:05:21.339 killing process with pid 777783 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@973 -- # kill 777783 00:05:21.339 16:11:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 777783 00:05:21.598 00:05:21.598 real 0m1.103s 00:05:21.598 user 0m1.134s 00:05:21.598 sys 0m0.408s 00:05:21.598 16:11:10 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.598 16:11:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 ************************************ 00:05:21.598 END TEST alias_rpc 00:05:21.598 ************************************ 00:05:21.598 16:11:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.598 16:11:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.598 16:11:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.598 16:11:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.598 16:11:10 -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 ************************************ 00:05:21.598 START TEST spdkcli_tcp 00:05:21.598 ************************************ 00:05:21.598 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.857 * Looking for test storage... 
00:05:21.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:21.857 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.857 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.857 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.857 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.857 16:11:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.857 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.858 --rc genhtml_branch_coverage=1 00:05:21.858 --rc genhtml_function_coverage=1 00:05:21.858 --rc genhtml_legend=1 00:05:21.858 --rc geninfo_all_blocks=1 00:05:21.858 --rc geninfo_unexecuted_blocks=1 00:05:21.858 00:05:21.858 ' 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.858 --rc genhtml_branch_coverage=1 00:05:21.858 --rc genhtml_function_coverage=1 00:05:21.858 --rc genhtml_legend=1 00:05:21.858 --rc geninfo_all_blocks=1 00:05:21.858 --rc 
geninfo_unexecuted_blocks=1 00:05:21.858 00:05:21.858 ' 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.858 --rc genhtml_branch_coverage=1 00:05:21.858 --rc genhtml_function_coverage=1 00:05:21.858 --rc genhtml_legend=1 00:05:21.858 --rc geninfo_all_blocks=1 00:05:21.858 --rc geninfo_unexecuted_blocks=1 00:05:21.858 00:05:21.858 ' 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.858 --rc genhtml_branch_coverage=1 00:05:21.858 --rc genhtml_function_coverage=1 00:05:21.858 --rc genhtml_legend=1 00:05:21.858 --rc geninfo_all_blocks=1 00:05:21.858 --rc geninfo_unexecuted_blocks=1 00:05:21.858 00:05:21.858 ' 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=777959 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 777959 00:05:21.858 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 777959 ']' 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.858 16:11:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.858 [2024-12-16 16:11:10.437538] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
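The spdkcli_tcp test exercises the RPC server over TCP by bridging port 9998 to the target's UNIX socket with socat, as the trace below shows, then issuing rpc_get_methods through the bridge. A minimal sketch of that bridge plus one call through it (flags as in the trace):

    # Sketch of the TCP-to-UNIX RPC bridge used by the tcp.sh test below.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # forward the TCP side
    socat_pid=$!
    # rpc.py then targets the TCP endpoint (-r retries, -t timeout in seconds):
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true                      # tear the bridge down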
00:05:21.858 [2024-12-16 16:11:10.437586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777959 ] 00:05:22.117 [2024-12-16 16:11:10.513636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.117 [2024-12-16 16:11:10.537969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.117 [2024-12-16 16:11:10.537970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.375 16:11:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.375 16:11:10 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:22.375 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=778075 00:05:22.375 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.375 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.375 [ 00:05:22.375 "bdev_malloc_delete", 00:05:22.375 "bdev_malloc_create", 00:05:22.375 "bdev_null_resize", 00:05:22.375 "bdev_null_delete", 00:05:22.375 "bdev_null_create", 00:05:22.375 "bdev_nvme_cuse_unregister", 00:05:22.375 "bdev_nvme_cuse_register", 00:05:22.375 "bdev_opal_new_user", 00:05:22.375 "bdev_opal_set_lock_state", 00:05:22.375 "bdev_opal_delete", 00:05:22.375 "bdev_opal_get_info", 00:05:22.375 "bdev_opal_create", 00:05:22.375 "bdev_nvme_opal_revert", 00:05:22.376 "bdev_nvme_opal_init", 00:05:22.376 "bdev_nvme_send_cmd", 00:05:22.376 "bdev_nvme_set_keys", 00:05:22.376 "bdev_nvme_get_path_iostat", 00:05:22.376 "bdev_nvme_get_mdns_discovery_info", 00:05:22.376 "bdev_nvme_stop_mdns_discovery", 00:05:22.376 "bdev_nvme_start_mdns_discovery", 00:05:22.376 "bdev_nvme_set_multipath_policy", 00:05:22.376 "bdev_nvme_set_preferred_path", 00:05:22.376 "bdev_nvme_get_io_paths", 00:05:22.376 "bdev_nvme_remove_error_injection", 00:05:22.376 "bdev_nvme_add_error_injection", 00:05:22.376 "bdev_nvme_get_discovery_info", 00:05:22.376 "bdev_nvme_stop_discovery", 00:05:22.376 "bdev_nvme_start_discovery", 00:05:22.376 "bdev_nvme_get_controller_health_info", 00:05:22.376 "bdev_nvme_disable_controller", 00:05:22.376 "bdev_nvme_enable_controller", 00:05:22.376 "bdev_nvme_reset_controller", 00:05:22.376 "bdev_nvme_get_transport_statistics", 00:05:22.376 "bdev_nvme_apply_firmware", 00:05:22.376 "bdev_nvme_detach_controller", 00:05:22.376 "bdev_nvme_get_controllers", 00:05:22.376 "bdev_nvme_attach_controller", 00:05:22.376 "bdev_nvme_set_hotplug", 00:05:22.376 "bdev_nvme_set_options", 00:05:22.376 "bdev_passthru_delete", 00:05:22.376 "bdev_passthru_create", 00:05:22.376 "bdev_lvol_set_parent_bdev", 00:05:22.376 "bdev_lvol_set_parent", 00:05:22.376 "bdev_lvol_check_shallow_copy", 00:05:22.376 "bdev_lvol_start_shallow_copy", 00:05:22.376 "bdev_lvol_grow_lvstore", 00:05:22.376 "bdev_lvol_get_lvols", 00:05:22.376 "bdev_lvol_get_lvstores", 00:05:22.376 "bdev_lvol_delete", 00:05:22.376 "bdev_lvol_set_read_only", 00:05:22.376 "bdev_lvol_resize", 00:05:22.376 "bdev_lvol_decouple_parent", 00:05:22.376 "bdev_lvol_inflate", 00:05:22.376 "bdev_lvol_rename", 00:05:22.376 "bdev_lvol_clone_bdev", 00:05:22.376 "bdev_lvol_clone", 00:05:22.376 "bdev_lvol_snapshot", 00:05:22.376 "bdev_lvol_create", 00:05:22.376 "bdev_lvol_delete_lvstore", 00:05:22.376 "bdev_lvol_rename_lvstore", 
00:05:22.376 "bdev_lvol_create_lvstore", 00:05:22.376 "bdev_raid_set_options", 00:05:22.376 "bdev_raid_remove_base_bdev", 00:05:22.376 "bdev_raid_add_base_bdev", 00:05:22.376 "bdev_raid_delete", 00:05:22.376 "bdev_raid_create", 00:05:22.376 "bdev_raid_get_bdevs", 00:05:22.376 "bdev_error_inject_error", 00:05:22.376 "bdev_error_delete", 00:05:22.376 "bdev_error_create", 00:05:22.376 "bdev_split_delete", 00:05:22.376 "bdev_split_create", 00:05:22.376 "bdev_delay_delete", 00:05:22.376 "bdev_delay_create", 00:05:22.376 "bdev_delay_update_latency", 00:05:22.376 "bdev_zone_block_delete", 00:05:22.376 "bdev_zone_block_create", 00:05:22.376 "blobfs_create", 00:05:22.376 "blobfs_detect", 00:05:22.376 "blobfs_set_cache_size", 00:05:22.376 "bdev_aio_delete", 00:05:22.376 "bdev_aio_rescan", 00:05:22.376 "bdev_aio_create", 00:05:22.376 "bdev_ftl_set_property", 00:05:22.376 "bdev_ftl_get_properties", 00:05:22.376 "bdev_ftl_get_stats", 00:05:22.376 "bdev_ftl_unmap", 00:05:22.376 "bdev_ftl_unload", 00:05:22.376 "bdev_ftl_delete", 00:05:22.376 "bdev_ftl_load", 00:05:22.376 "bdev_ftl_create", 00:05:22.376 "bdev_virtio_attach_controller", 00:05:22.376 "bdev_virtio_scsi_get_devices", 00:05:22.376 "bdev_virtio_detach_controller", 00:05:22.376 "bdev_virtio_blk_set_hotplug", 00:05:22.376 "bdev_iscsi_delete", 00:05:22.376 "bdev_iscsi_create", 00:05:22.376 "bdev_iscsi_set_options", 00:05:22.376 "accel_error_inject_error", 00:05:22.376 "ioat_scan_accel_module", 00:05:22.376 "dsa_scan_accel_module", 00:05:22.376 "iaa_scan_accel_module", 00:05:22.376 "vfu_virtio_create_fs_endpoint", 00:05:22.376 "vfu_virtio_create_scsi_endpoint", 00:05:22.376 "vfu_virtio_scsi_remove_target", 00:05:22.376 "vfu_virtio_scsi_add_target", 00:05:22.376 "vfu_virtio_create_blk_endpoint", 00:05:22.376 "vfu_virtio_delete_endpoint", 00:05:22.376 "keyring_file_remove_key", 00:05:22.376 "keyring_file_add_key", 00:05:22.376 "keyring_linux_set_options", 00:05:22.376 "fsdev_aio_delete", 00:05:22.376 "fsdev_aio_create", 00:05:22.376 "iscsi_get_histogram", 00:05:22.376 "iscsi_enable_histogram", 00:05:22.376 "iscsi_set_options", 00:05:22.376 "iscsi_get_auth_groups", 00:05:22.376 "iscsi_auth_group_remove_secret", 00:05:22.376 "iscsi_auth_group_add_secret", 00:05:22.376 "iscsi_delete_auth_group", 00:05:22.376 "iscsi_create_auth_group", 00:05:22.376 "iscsi_set_discovery_auth", 00:05:22.376 "iscsi_get_options", 00:05:22.376 "iscsi_target_node_request_logout", 00:05:22.376 "iscsi_target_node_set_redirect", 00:05:22.376 "iscsi_target_node_set_auth", 00:05:22.376 "iscsi_target_node_add_lun", 00:05:22.376 "iscsi_get_stats", 00:05:22.376 "iscsi_get_connections", 00:05:22.376 "iscsi_portal_group_set_auth", 00:05:22.376 "iscsi_start_portal_group", 00:05:22.376 "iscsi_delete_portal_group", 00:05:22.376 "iscsi_create_portal_group", 00:05:22.376 "iscsi_get_portal_groups", 00:05:22.376 "iscsi_delete_target_node", 00:05:22.376 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.376 "iscsi_target_node_add_pg_ig_maps", 00:05:22.376 "iscsi_create_target_node", 00:05:22.376 "iscsi_get_target_nodes", 00:05:22.376 "iscsi_delete_initiator_group", 00:05:22.376 "iscsi_initiator_group_remove_initiators", 00:05:22.376 "iscsi_initiator_group_add_initiators", 00:05:22.376 "iscsi_create_initiator_group", 00:05:22.376 "iscsi_get_initiator_groups", 00:05:22.376 "nvmf_set_crdt", 00:05:22.376 "nvmf_set_config", 00:05:22.376 "nvmf_set_max_subsystems", 00:05:22.376 "nvmf_stop_mdns_prr", 00:05:22.376 "nvmf_publish_mdns_prr", 00:05:22.376 "nvmf_subsystem_get_listeners", 00:05:22.376 
"nvmf_subsystem_get_qpairs", 00:05:22.376 "nvmf_subsystem_get_controllers", 00:05:22.376 "nvmf_get_stats", 00:05:22.376 "nvmf_get_transports", 00:05:22.376 "nvmf_create_transport", 00:05:22.376 "nvmf_get_targets", 00:05:22.376 "nvmf_delete_target", 00:05:22.376 "nvmf_create_target", 00:05:22.376 "nvmf_subsystem_allow_any_host", 00:05:22.376 "nvmf_subsystem_set_keys", 00:05:22.376 "nvmf_subsystem_remove_host", 00:05:22.376 "nvmf_subsystem_add_host", 00:05:22.376 "nvmf_ns_remove_host", 00:05:22.376 "nvmf_ns_add_host", 00:05:22.376 "nvmf_subsystem_remove_ns", 00:05:22.376 "nvmf_subsystem_set_ns_ana_group", 00:05:22.376 "nvmf_subsystem_add_ns", 00:05:22.376 "nvmf_subsystem_listener_set_ana_state", 00:05:22.376 "nvmf_discovery_get_referrals", 00:05:22.376 "nvmf_discovery_remove_referral", 00:05:22.376 "nvmf_discovery_add_referral", 00:05:22.376 "nvmf_subsystem_remove_listener", 00:05:22.376 "nvmf_subsystem_add_listener", 00:05:22.376 "nvmf_delete_subsystem", 00:05:22.376 "nvmf_create_subsystem", 00:05:22.376 "nvmf_get_subsystems", 00:05:22.376 "env_dpdk_get_mem_stats", 00:05:22.376 "nbd_get_disks", 00:05:22.376 "nbd_stop_disk", 00:05:22.376 "nbd_start_disk", 00:05:22.376 "ublk_recover_disk", 00:05:22.376 "ublk_get_disks", 00:05:22.376 "ublk_stop_disk", 00:05:22.376 "ublk_start_disk", 00:05:22.376 "ublk_destroy_target", 00:05:22.376 "ublk_create_target", 00:05:22.376 "virtio_blk_create_transport", 00:05:22.376 "virtio_blk_get_transports", 00:05:22.376 "vhost_controller_set_coalescing", 00:05:22.376 "vhost_get_controllers", 00:05:22.376 "vhost_delete_controller", 00:05:22.376 "vhost_create_blk_controller", 00:05:22.376 "vhost_scsi_controller_remove_target", 00:05:22.376 "vhost_scsi_controller_add_target", 00:05:22.376 "vhost_start_scsi_controller", 00:05:22.376 "vhost_create_scsi_controller", 00:05:22.376 "thread_set_cpumask", 00:05:22.376 "scheduler_set_options", 00:05:22.376 "framework_get_governor", 00:05:22.376 "framework_get_scheduler", 00:05:22.376 "framework_set_scheduler", 00:05:22.376 "framework_get_reactors", 00:05:22.376 "thread_get_io_channels", 00:05:22.376 "thread_get_pollers", 00:05:22.376 "thread_get_stats", 00:05:22.376 "framework_monitor_context_switch", 00:05:22.376 "spdk_kill_instance", 00:05:22.376 "log_enable_timestamps", 00:05:22.376 "log_get_flags", 00:05:22.376 "log_clear_flag", 00:05:22.376 "log_set_flag", 00:05:22.376 "log_get_level", 00:05:22.376 "log_set_level", 00:05:22.376 "log_get_print_level", 00:05:22.376 "log_set_print_level", 00:05:22.376 "framework_enable_cpumask_locks", 00:05:22.376 "framework_disable_cpumask_locks", 00:05:22.376 "framework_wait_init", 00:05:22.376 "framework_start_init", 00:05:22.376 "scsi_get_devices", 00:05:22.376 "bdev_get_histogram", 00:05:22.376 "bdev_enable_histogram", 00:05:22.376 "bdev_set_qos_limit", 00:05:22.376 "bdev_set_qd_sampling_period", 00:05:22.376 "bdev_get_bdevs", 00:05:22.376 "bdev_reset_iostat", 00:05:22.376 "bdev_get_iostat", 00:05:22.376 "bdev_examine", 00:05:22.376 "bdev_wait_for_examine", 00:05:22.376 "bdev_set_options", 00:05:22.376 "accel_get_stats", 00:05:22.376 "accel_set_options", 00:05:22.376 "accel_set_driver", 00:05:22.376 "accel_crypto_key_destroy", 00:05:22.376 "accel_crypto_keys_get", 00:05:22.376 "accel_crypto_key_create", 00:05:22.376 "accel_assign_opc", 00:05:22.376 "accel_get_module_info", 00:05:22.376 "accel_get_opc_assignments", 00:05:22.376 "vmd_rescan", 00:05:22.376 "vmd_remove_device", 00:05:22.376 "vmd_enable", 00:05:22.376 "sock_get_default_impl", 00:05:22.376 "sock_set_default_impl", 
00:05:22.376 "sock_impl_set_options", 00:05:22.376 "sock_impl_get_options", 00:05:22.376 "iobuf_get_stats", 00:05:22.376 "iobuf_set_options", 00:05:22.376 "keyring_get_keys", 00:05:22.376 "vfu_tgt_set_base_path", 00:05:22.376 "framework_get_pci_devices", 00:05:22.376 "framework_get_config", 00:05:22.376 "framework_get_subsystems", 00:05:22.376 "fsdev_set_opts", 00:05:22.376 "fsdev_get_opts", 00:05:22.376 "trace_get_info", 00:05:22.376 "trace_get_tpoint_group_mask", 00:05:22.376 "trace_disable_tpoint_group", 00:05:22.376 "trace_enable_tpoint_group", 00:05:22.377 "trace_clear_tpoint_mask", 00:05:22.377 "trace_set_tpoint_mask", 00:05:22.377 "notify_get_notifications", 00:05:22.377 "notify_get_types", 00:05:22.377 "spdk_get_version", 00:05:22.377 "rpc_get_methods" 00:05:22.377 ] 00:05:22.377 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.377 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.377 16:11:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 777959 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 777959 ']' 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 777959 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.377 16:11:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777959 00:05:22.635 16:11:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.635 16:11:11 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.636 16:11:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777959' 00:05:22.636 killing process with pid 777959 00:05:22.636 16:11:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 777959 00:05:22.636 16:11:11 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 777959 00:05:22.895 00:05:22.895 real 0m1.099s 00:05:22.895 user 0m1.844s 00:05:22.895 sys 0m0.442s 00:05:22.895 16:11:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.895 16:11:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.895 ************************************ 00:05:22.895 END TEST spdkcli_tcp 00:05:22.895 ************************************ 00:05:22.895 16:11:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.895 16:11:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.895 16:11:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.895 16:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:22.895 ************************************ 00:05:22.895 START TEST dpdk_mem_utility 00:05:22.895 ************************************ 00:05:22.895 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.895 * Looking for test storage... 
00:05:22.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:22.895 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.895 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.895 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.154 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.154 16:11:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:23.154 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.154 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.154 --rc genhtml_branch_coverage=1 00:05:23.154 --rc genhtml_function_coverage=1 00:05:23.155 --rc genhtml_legend=1 00:05:23.155 --rc geninfo_all_blocks=1 00:05:23.155 --rc geninfo_unexecuted_blocks=1 00:05:23.155 00:05:23.155 ' 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.155 --rc 
genhtml_branch_coverage=1 00:05:23.155 --rc genhtml_function_coverage=1 00:05:23.155 --rc genhtml_legend=1 00:05:23.155 --rc geninfo_all_blocks=1 00:05:23.155 --rc geninfo_unexecuted_blocks=1 00:05:23.155 00:05:23.155 ' 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.155 --rc genhtml_branch_coverage=1 00:05:23.155 --rc genhtml_function_coverage=1 00:05:23.155 --rc genhtml_legend=1 00:05:23.155 --rc geninfo_all_blocks=1 00:05:23.155 --rc geninfo_unexecuted_blocks=1 00:05:23.155 00:05:23.155 ' 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.155 --rc genhtml_branch_coverage=1 00:05:23.155 --rc genhtml_function_coverage=1 00:05:23.155 --rc genhtml_legend=1 00:05:23.155 --rc geninfo_all_blocks=1 00:05:23.155 --rc geninfo_unexecuted_blocks=1 00:05:23.155 00:05:23.155 ' 00:05:23.155 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.155 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=778157 00:05:23.155 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 778157 00:05:23.155 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 778157 ']' 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.155 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.155 [2024-12-16 16:11:11.590229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:23.155 [2024-12-16 16:11:11.590276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778157 ] 00:05:23.155 [2024-12-16 16:11:11.667249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.155 [2024-12-16 16:11:11.690103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.415 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.415 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:23.415 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:23.415 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:23.415 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.415 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.415 { 00:05:23.415 "filename": "/tmp/spdk_mem_dump.txt" 00:05:23.415 } 00:05:23.415 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.415 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.415 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:23.415 1 heaps totaling size 818.000000 MiB 00:05:23.415 size: 818.000000 MiB heap id: 0 00:05:23.415 end heaps---------- 00:05:23.415 9 mempools totaling size 603.782043 MiB 00:05:23.415 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.415 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.415 size: 100.555481 MiB name: bdev_io_778157 00:05:23.415 size: 50.003479 MiB name: msgpool_778157 00:05:23.415 size: 36.509338 MiB name: fsdev_io_778157 00:05:23.415 size: 21.763794 MiB name: PDU_Pool 00:05:23.415 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.415 size: 4.133484 MiB name: evtpool_778157 00:05:23.415 size: 0.026123 MiB name: Session_Pool 00:05:23.415 end mempools------- 00:05:23.415 6 memzones totaling size 4.142822 MiB 00:05:23.415 size: 1.000366 MiB name: RG_ring_0_778157 00:05:23.415 size: 1.000366 MiB name: RG_ring_1_778157 00:05:23.415 size: 1.000366 MiB name: RG_ring_4_778157 00:05:23.415 size: 1.000366 MiB name: RG_ring_5_778157 00:05:23.415 size: 0.125366 MiB name: RG_ring_2_778157 00:05:23.415 size: 0.015991 MiB name: RG_ring_3_778157 00:05:23.415 end memzones------- 00:05:23.415 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.415 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:23.415 list of free elements. 
size: 10.852478 MiB 00:05:23.415 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:23.415 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:23.415 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:23.415 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:23.415 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:23.415 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:23.415 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:23.415 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:23.415 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:23.415 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:23.415 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:23.415 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:23.415 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:23.415 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:23.415 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:23.415 list of standard malloc elements. size: 199.218628 MiB 00:05:23.415 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:23.415 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:23.415 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:23.415 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:23.415 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:23.415 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:23.415 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:23.415 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:23.415 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:23.415 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:23.415 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:23.415 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:23.415 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:23.415 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:23.415 list of memzone associated elements. size: 607.928894 MiB 00:05:23.415 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:23.415 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.415 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:23.415 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.415 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:23.415 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_778157_0 00:05:23.415 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:23.415 associated memzone info: size: 48.002930 MiB name: MP_msgpool_778157_0 00:05:23.415 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:23.415 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_778157_0 00:05:23.415 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:23.415 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.415 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:23.415 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.415 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:23.415 associated memzone info: size: 3.000122 MiB name: MP_evtpool_778157_0 00:05:23.415 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:23.415 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_778157 00:05:23.415 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:23.415 associated memzone info: size: 1.007996 MiB name: MP_evtpool_778157 00:05:23.415 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:23.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.415 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:23.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.415 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:23.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.415 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:23.415 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.415 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:23.415 associated memzone info: size: 1.000366 MiB name: RG_ring_0_778157 00:05:23.415 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:23.415 associated memzone info: size: 1.000366 MiB name: RG_ring_1_778157 00:05:23.416 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:23.416 associated memzone info: size: 1.000366 MiB name: RG_ring_4_778157 00:05:23.416 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:23.416 associated memzone info: size: 1.000366 MiB name: RG_ring_5_778157 00:05:23.416 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:23.416 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_778157 00:05:23.416 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:23.416 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_778157 00:05:23.416 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:23.416 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.416 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:23.416 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.416 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:23.416 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.416 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:23.416 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_778157 00:05:23.416 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:23.416 associated memzone info: size: 0.125366 MiB name: RG_ring_2_778157 00:05:23.416 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:23.416 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.416 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:23.416 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.416 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:23.416 associated memzone info: size: 0.015991 MiB name: RG_ring_3_778157 00:05:23.416 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:23.416 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.416 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:23.416 associated memzone info: size: 0.000183 MiB name: MP_msgpool_778157 00:05:23.416 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:23.416 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_778157 00:05:23.416 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:23.416 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_778157 00:05:23.416 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:23.416 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.416 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.416 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 778157 00:05:23.416 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 778157 ']' 00:05:23.416 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 778157 00:05:23.416 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:23.416 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.416 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778157 00:05:23.675 16:11:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.675 16:11:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.675 16:11:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778157' 00:05:23.675 killing process with pid 778157 00:05:23.675 16:11:12 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 778157 00:05:23.675 16:11:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 778157 00:05:23.934 00:05:23.934 real 0m0.966s 00:05:23.934 user 0m0.888s 00:05:23.934 sys 0m0.410s 00:05:23.934 16:11:12 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.934 16:11:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.934 ************************************ 00:05:23.934 END TEST dpdk_mem_utility 00:05:23.934 ************************************ 00:05:23.934 16:11:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.934 16:11:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.934 16:11:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.934 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:23.934 ************************************ 00:05:23.934 START TEST event 00:05:23.934 ************************************ 00:05:23.934 16:11:12 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.934 * Looking for test storage... 00:05:23.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:23.934 16:11:12 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.934 16:11:12 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.934 16:11:12 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.193 16:11:12 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.193 16:11:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.193 16:11:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.193 16:11:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.193 16:11:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.193 16:11:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.193 16:11:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.193 16:11:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.193 16:11:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.193 16:11:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.193 16:11:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.194 16:11:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.194 16:11:12 event -- scripts/common.sh@344 -- # case "$op" in 00:05:24.194 16:11:12 event -- scripts/common.sh@345 -- # : 1 00:05:24.194 16:11:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.194 16:11:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.194 16:11:12 event -- scripts/common.sh@365 -- # decimal 1 00:05:24.194 16:11:12 event -- scripts/common.sh@353 -- # local d=1 00:05:24.194 16:11:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.194 16:11:12 event -- scripts/common.sh@355 -- # echo 1 00:05:24.194 16:11:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.194 16:11:12 event -- scripts/common.sh@366 -- # decimal 2 00:05:24.194 16:11:12 event -- scripts/common.sh@353 -- # local d=2 00:05:24.194 16:11:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.194 16:11:12 event -- scripts/common.sh@355 -- # echo 2 00:05:24.194 16:11:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.194 16:11:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.194 16:11:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.194 16:11:12 event -- scripts/common.sh@368 -- # return 0 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.194 --rc genhtml_branch_coverage=1 00:05:24.194 --rc genhtml_function_coverage=1 00:05:24.194 --rc genhtml_legend=1 00:05:24.194 --rc geninfo_all_blocks=1 00:05:24.194 --rc geninfo_unexecuted_blocks=1 00:05:24.194 00:05:24.194 ' 00:05:24.194 16:11:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:24.194 16:11:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:24.194 16:11:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:24.194 16:11:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.194 16:11:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.194 ************************************ 00:05:24.194 START TEST event_perf 00:05:24.194 ************************************ 00:05:24.194 16:11:12 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:24.194 Running I/O for 1 seconds...[2024-12-16 16:11:12.613248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:24.194 [2024-12-16 16:11:12.613315] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778441 ] 00:05:24.194 [2024-12-16 16:11:12.690872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.194 [2024-12-16 16:11:12.716686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.194 [2024-12-16 16:11:12.716806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.194 [2024-12-16 16:11:12.716914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.194 Running I/O for 1 seconds...[2024-12-16 16:11:12.716916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.570 00:05:25.570 lcore 0: 204146 00:05:25.570 lcore 1: 204145 00:05:25.570 lcore 2: 204146 00:05:25.570 lcore 3: 204146 00:05:25.570 done. 00:05:25.570 00:05:25.570 real 0m1.160s 00:05:25.570 user 0m4.084s 00:05:25.570 sys 0m0.073s 00:05:25.570 16:11:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.570 16:11:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.570 ************************************ 00:05:25.570 END TEST event_perf 00:05:25.570 ************************************ 00:05:25.570 16:11:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.570 16:11:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:25.570 16:11:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.570 16:11:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.570 ************************************ 00:05:25.570 START TEST event_reactor 00:05:25.570 ************************************ 00:05:25.570 16:11:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.570 [2024-12-16 16:11:13.846039] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:25.570 [2024-12-16 16:11:13.846117] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778704 ] 00:05:25.570 [2024-12-16 16:11:13.924246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.570 [2024-12-16 16:11:13.945701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.507 test_start 00:05:26.507 oneshot 00:05:26.507 tick 100 00:05:26.507 tick 100 00:05:26.507 tick 250 00:05:26.507 tick 100 00:05:26.507 tick 100 00:05:26.507 tick 100 00:05:26.507 tick 250 00:05:26.507 tick 500 00:05:26.507 tick 100 00:05:26.507 tick 100 00:05:26.507 tick 250 00:05:26.507 tick 100 00:05:26.507 tick 100 00:05:26.507 test_end 00:05:26.507 00:05:26.507 real 0m1.151s 00:05:26.507 user 0m1.070s 00:05:26.507 sys 0m0.077s 00:05:26.507 16:11:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.507 16:11:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.507 ************************************ 00:05:26.507 END TEST event_reactor 00:05:26.507 ************************************ 00:05:26.507 16:11:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.507 16:11:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:26.507 16:11:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.507 16:11:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.507 ************************************ 00:05:26.507 START TEST event_reactor_perf 00:05:26.507 ************************************ 00:05:26.507 16:11:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.507 [2024-12-16 16:11:15.069037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:26.507 [2024-12-16 16:11:15.069112] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778950 ] 00:05:26.765 [2024-12-16 16:11:15.147553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.765 [2024-12-16 16:11:15.168941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.700 test_start 00:05:27.700 test_end 00:05:27.700 Performance: 515570 events per second 00:05:27.700 00:05:27.700 real 0m1.153s 00:05:27.700 user 0m1.077s 00:05:27.700 sys 0m0.072s 00:05:27.700 16:11:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.700 16:11:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.700 ************************************ 00:05:27.700 END TEST event_reactor_perf 00:05:27.700 ************************************ 00:05:27.700 16:11:16 event -- event/event.sh@49 -- # uname -s 00:05:27.700 16:11:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.700 16:11:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.700 16:11:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.700 16:11:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.700 16:11:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.700 ************************************ 00:05:27.700 START TEST event_scheduler 00:05:27.700 ************************************ 00:05:27.700 16:11:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.959 * Looking for test storage... 
00:05:27.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:27.959 16:11:16 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.959 16:11:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.959 16:11:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.959 16:11:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.959 16:11:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.960 16:11:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.960 --rc genhtml_branch_coverage=1 00:05:27.960 --rc genhtml_function_coverage=1 00:05:27.960 --rc genhtml_legend=1 00:05:27.960 --rc geninfo_all_blocks=1 00:05:27.960 --rc geninfo_unexecuted_blocks=1 00:05:27.960 00:05:27.960 ' 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.960 --rc genhtml_branch_coverage=1 00:05:27.960 --rc genhtml_function_coverage=1 00:05:27.960 --rc genhtml_legend=1 00:05:27.960 --rc geninfo_all_blocks=1 00:05:27.960 --rc geninfo_unexecuted_blocks=1 00:05:27.960 00:05:27.960 ' 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.960 --rc genhtml_branch_coverage=1 00:05:27.960 --rc genhtml_function_coverage=1 00:05:27.960 --rc genhtml_legend=1 00:05:27.960 --rc geninfo_all_blocks=1 00:05:27.960 --rc geninfo_unexecuted_blocks=1 00:05:27.960 00:05:27.960 ' 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.960 --rc genhtml_branch_coverage=1 00:05:27.960 --rc genhtml_function_coverage=1 00:05:27.960 --rc genhtml_legend=1 00:05:27.960 --rc geninfo_all_blocks=1 00:05:27.960 --rc geninfo_unexecuted_blocks=1 00:05:27.960 00:05:27.960 ' 00:05:27.960 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.960 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=779231 00:05:27.960 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.960 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.960 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 779231 
00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 779231 ']' 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.960 16:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.960 [2024-12-16 16:11:16.499633] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:27.960 [2024-12-16 16:11:16.499679] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779231 ] 00:05:28.219 [2024-12-16 16:11:16.569258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.219 [2024-12-16 16:11:16.594997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.219 [2024-12-16 16:11:16.595129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.219 [2024-12-16 16:11:16.595184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.219 [2024-12-16 16:11:16.595185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:28.219 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.219 [2024-12-16 16:11:16.663862] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:28.219 [2024-12-16 16:11:16.663879] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:28.219 [2024-12-16 16:11:16.663887] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.219 [2024-12-16 16:11:16.663893] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.219 [2024-12-16 16:11:16.663898] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.219 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.219 [2024-12-16 16:11:16.734014] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
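For reference, the two framework RPCs traced just above (framework_set_scheduler and framework_start_init) can be replayed by hand against any SPDK application launched with --wait-for-rpc, which is why scheduler.sh starts its test app that way: the scheduler is selected while deferred initialization is still paused. A minimal sketch, assuming the default RPC socket /var/tmp/spdk.sock and a checkout-relative scripts/ path; framework_get_scheduler is added here only as a sanity check and is not part of the traced test:

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # select the dynamic scheduler while init is paused
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init              # resume and complete subsystem initialization
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler           # report the scheduler now in effect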
00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.219 16:11:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.219 16:11:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 ************************************ 00:05:28.220 START TEST scheduler_create_thread 00:05:28.220 ************************************ 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 2 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 3 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 4 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 5 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 6 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.220 7 00:05:28.220 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.479 8 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.479 9 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.479 10 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.479 16:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.046 16:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.046 16:11:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.046 16:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.046 16:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.423 16:11:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.423 16:11:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.423 16:11:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.423 16:11:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.423 16:11:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 16:11:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 00:05:31.359 real 0m3.101s 00:05:31.359 user 0m0.026s 00:05:31.359 sys 0m0.004s 00:05:31.359 16:11:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.359 16:11:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 ************************************ 00:05:31.359 END TEST scheduler_create_thread 00:05:31.359 ************************************ 00:05:31.359 16:11:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.359 16:11:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 779231 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 779231 ']' 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 779231 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779231 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779231' 00:05:31.359 killing process with pid 779231 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 779231 00:05:31.359 16:11:19 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 779231 00:05:31.926 [2024-12-16 16:11:20.249122] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
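Stripped of the xtrace noise, the scheduler_create_thread subtest above drives a test-only RPC plugin rather than built-in RPCs. The traced calls reduce to roughly the sequence below; this is a sketch, assuming the plugin module shipped alongside the scheduler test app is importable by rpc.py (e.g. via PYTHONPATH), and thread IDs 11 and 12 are simply the handles the create calls returned in this particular run:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread pinned to core 0, 100% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # unpinned thread, created idle
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # set thread 11 to 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread 12 while the app runs

Varying pinning and activity this way is what gives the dynamic scheduler selected earlier something to rebalance across the four reactor cores.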
00:05:31.926 00:05:31.926 real 0m4.153s 00:05:31.926 user 0m6.723s 00:05:31.926 sys 0m0.368s 00:05:31.926 16:11:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.926 16:11:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.926 ************************************ 00:05:31.926 END TEST event_scheduler 00:05:31.926 ************************************ 00:05:31.926 16:11:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.926 16:11:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.926 16:11:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.926 16:11:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.926 16:11:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.926 ************************************ 00:05:31.926 START TEST app_repeat 00:05:31.926 ************************************ 00:05:31.926 16:11:20 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:31.926 16:11:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=779951 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 779951' 00:05:31.927 Process app_repeat pid: 779951 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.927 spdk_app_start Round 0 00:05:31.927 16:11:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 779951 /var/tmp/spdk-nbd.sock 00:05:31.927 16:11:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 779951 ']' 00:05:31.927 16:11:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.927 16:11:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.927 16:11:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.927 16:11:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.927 16:11:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.185 [2024-12-16 16:11:20.542357] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:32.185 [2024-12-16 16:11:20.542421] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779951 ] 00:05:32.185 [2024-12-16 16:11:20.619344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.185 [2024-12-16 16:11:20.643999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.185 [2024-12-16 16:11:20.644002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.185 16:11:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.185 16:11:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.185 16:11:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.444 Malloc0 00:05:32.444 16:11:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.702 Malloc1 00:05:32.702 16:11:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.702 16:11:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.961 /dev/nbd0 00:05:32.961 16:11:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.961 16:11:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.961 1+0 records in 00:05:32.961 1+0 records out 00:05:32.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226274 s, 18.1 MB/s 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.961 16:11:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.961 16:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.961 16:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.961 16:11:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.220 /dev/nbd1 00:05:33.220 16:11:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.220 16:11:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.220 1+0 records in 00:05:33.220 1+0 records out 00:05:33.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218387 s, 18.8 MB/s 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.220 16:11:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.220 16:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.220 16:11:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.220 
16:11:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.220 16:11:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.220 16:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.479 { 00:05:33.479 "nbd_device": "/dev/nbd0", 00:05:33.479 "bdev_name": "Malloc0" 00:05:33.479 }, 00:05:33.479 { 00:05:33.479 "nbd_device": "/dev/nbd1", 00:05:33.479 "bdev_name": "Malloc1" 00:05:33.479 } 00:05:33.479 ]' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.479 { 00:05:33.479 "nbd_device": "/dev/nbd0", 00:05:33.479 "bdev_name": "Malloc0" 00:05:33.479 }, 00:05:33.479 { 00:05:33.479 "nbd_device": "/dev/nbd1", 00:05:33.479 "bdev_name": "Malloc1" 00:05:33.479 } 00:05:33.479 ]' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.479 /dev/nbd1' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.479 /dev/nbd1' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.479 256+0 records in 00:05:33.479 256+0 records out 00:05:33.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107121 s, 97.9 MB/s 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.479 256+0 records in 00:05:33.479 256+0 records out 00:05:33.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145331 s, 72.2 MB/s 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.479 256+0 records in 00:05:33.479 256+0 records out 00:05:33.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146281 s, 71.7 MB/s 00:05:33.479 16:11:21 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.479 16:11:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.738 16:11:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.996 16:11:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.996 16:11:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.996 16:11:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.996 16:11:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.996 16:11:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:33.996 16:11:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.997 16:11:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.997 16:11:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.997 16:11:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.997 16:11:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.997 16:11:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.255 16:11:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.255 16:11:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.514 16:11:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.514 [2024-12-16 16:11:23.016228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.514 [2024-12-16 16:11:23.036245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.514 [2024-12-16 16:11:23.036245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.514 [2024-12-16 16:11:23.076427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.514 [2024-12-16 16:11:23.076466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.797 16:11:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.797 16:11:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.797 spdk_app_start Round 1 00:05:37.797 16:11:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 779951 /var/tmp/spdk-nbd.sock 00:05:37.797 16:11:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 779951 ']' 00:05:37.797 16:11:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.797 16:11:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.797 16:11:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
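
The `waitforlisten` trace above polls the freshly restarted app_repeat process until its RPC UNIX domain socket accepts connections, and only then issues the `bdev_malloc_create` calls for the next round. A minimal sketch of that polling idiom, assuming rpc.py's `-t` timeout flag and the `rpc_get_methods` method as the probe; the real helper in autotest_common.sh handles TCP addresses and retry limits differently:

    # Sketch of the waitforlisten idiom traced above, not the exact
    # autotest_common.sh implementation. Assumes scripts/rpc.py is
    # invoked from the SPDK repository root.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died early
            # A harmless RPC succeeds only once the socket is listening.
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
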
00:05:37.797 16:11:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.797 16:11:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.797 16:11:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.797 16:11:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.797 16:11:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.797 Malloc0 00:05:37.797 16:11:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.055 Malloc1 00:05:38.055 16:11:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.055 16:11:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.314 /dev/nbd0 00:05:38.314 16:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.314 16:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:38.314 1+0 records in 00:05:38.314 1+0 records out 00:05:38.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211892 s, 19.3 MB/s 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.314 16:11:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.314 16:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.314 16:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.314 16:11:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.572 /dev/nbd1 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.572 1+0 records in 00:05:38.572 1+0 records out 00:05:38.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232052 s, 17.7 MB/s 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.572 16:11:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.572 16:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:38.831 { 00:05:38.831 "nbd_device": "/dev/nbd0", 00:05:38.831 "bdev_name": "Malloc0" 00:05:38.831 }, 00:05:38.831 { 00:05:38.831 "nbd_device": "/dev/nbd1", 00:05:38.831 "bdev_name": "Malloc1" 00:05:38.831 } 00:05:38.831 ]' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.831 { 00:05:38.831 "nbd_device": "/dev/nbd0", 00:05:38.831 "bdev_name": "Malloc0" 00:05:38.831 }, 00:05:38.831 { 00:05:38.831 "nbd_device": "/dev/nbd1", 00:05:38.831 "bdev_name": "Malloc1" 00:05:38.831 } 00:05:38.831 ]' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.831 /dev/nbd1' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.831 /dev/nbd1' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.831 256+0 records in 00:05:38.831 256+0 records out 00:05:38.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106548 s, 98.4 MB/s 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.831 256+0 records in 00:05:38.831 256+0 records out 00:05:38.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141957 s, 73.9 MB/s 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.831 256+0 records in 00:05:38.831 256+0 records out 00:05:38.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144595 s, 72.5 MB/s 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.831 16:11:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.090 16:11:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.348 16:11:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.348 16:11:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.349 16:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.608 16:11:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.608 16:11:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.608 16:11:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.867 [2024-12-16 16:11:28.335355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.867 [2024-12-16 16:11:28.355337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.867 [2024-12-16 16:11:28.355338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.867 [2024-12-16 16:11:28.396365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.867 [2024-12-16 16:11:28.396405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.154 16:11:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.154 16:11:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.154 spdk_app_start Round 2 00:05:43.154 16:11:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 779951 /var/tmp/spdk-nbd.sock 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 779951 ']' 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
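
Each `waitfornbd` block traced above follows the same two-step pattern: poll /proc/partitions until the kernel exposes the nbd device, then prove it actually responds with a single 4 KiB direct-I/O read whose size is checked via `stat -c %s`. A condensed sketch of that pattern, with the temp path shortened for illustration; the traced helper in autotest_common.sh also retries the read itself:

    # Condensed sketch of the waitfornbd pattern traced above.
    waitfornbd_sketch() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O block read proves the device is servicing I/O.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }
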
00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.154 16:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.154 16:11:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.154 Malloc0 00:05:43.154 16:11:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.413 Malloc1 00:05:43.413 16:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.413 16:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.672 /dev/nbd0 00:05:43.672 16:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.672 16:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:43.672 1+0 records in 00:05:43.672 1+0 records out 00:05:43.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261826 s, 15.6 MB/s 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.672 16:11:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.672 16:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.672 16:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.672 16:11:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.672 /dev/nbd1 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.931 1+0 records in 00:05:43.931 1+0 records out 00:05:43.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199276 s, 20.6 MB/s 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.931 16:11:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:43.931 { 00:05:43.931 "nbd_device": "/dev/nbd0", 00:05:43.931 "bdev_name": "Malloc0" 00:05:43.931 }, 00:05:43.931 { 00:05:43.931 "nbd_device": "/dev/nbd1", 00:05:43.931 "bdev_name": "Malloc1" 00:05:43.931 } 00:05:43.931 ]' 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.931 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.931 { 00:05:43.931 "nbd_device": "/dev/nbd0", 00:05:43.931 "bdev_name": "Malloc0" 00:05:43.931 }, 00:05:43.931 { 00:05:43.931 "nbd_device": "/dev/nbd1", 00:05:43.931 "bdev_name": "Malloc1" 00:05:43.931 } 00:05:43.931 ]' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.190 /dev/nbd1' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.190 /dev/nbd1' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.190 256+0 records in 00:05:44.190 256+0 records out 00:05:44.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00976453 s, 107 MB/s 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.190 256+0 records in 00:05:44.190 256+0 records out 00:05:44.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142029 s, 73.8 MB/s 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.190 256+0 records in 00:05:44.190 256+0 records out 00:05:44.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143757 s, 72.9 MB/s 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.190 16:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.449 16:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.449 16:11:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.449 16:11:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.449 16:11:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.449 16:11:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.449 16:11:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.449 16:11:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.708 16:11:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.708 16:11:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.967 16:11:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.226 [2024-12-16 16:11:33.663990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.226 [2024-12-16 16:11:33.683778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.226 [2024-12-16 16:11:33.683780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.226 [2024-12-16 16:11:33.724842] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.226 [2024-12-16 16:11:33.724880] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.512 16:11:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 779951 /var/tmp/spdk-nbd.sock 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 779951 ']' 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
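
The `nbd_get_count` traces above show how the test counts attached devices: `nbd_get_disks` returns a JSON array, `jq -r '.[] | .nbd_device'` extracts the device paths, and `grep -c /dev/nbd` counts them. The bare `true` that appears in the trace after teardown exists because `grep -c` exits nonzero when the list is empty, exactly the case the test wants to reach (count=0). A sketch under those assumptions:

    # Sketch of the counting idiom traced above (bdev/nbd_common.sh).
    nbd_get_count_sketch() {
        local rpc_server=$1
        scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # empty list: prints 0 but exits 1
    }
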
00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.512 16:11:36 event.app_repeat -- event/event.sh@39 -- # killprocess 779951 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 779951 ']' 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 779951 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779951 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.512 16:11:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779951' 00:05:48.513 killing process with pid 779951 00:05:48.513 16:11:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 779951 00:05:48.513 16:11:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 779951 00:05:48.513 spdk_app_start is called in Round 0. 00:05:48.513 Shutdown signal received, stop current app iteration 00:05:48.513 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:48.513 spdk_app_start is called in Round 1. 00:05:48.513 Shutdown signal received, stop current app iteration 00:05:48.513 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:48.513 spdk_app_start is called in Round 2. 00:05:48.513 Shutdown signal received, stop current app iteration 00:05:48.513 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:48.513 spdk_app_start is called in Round 3. 
00:05:48.513 Shutdown signal received, stop current app iteration 00:05:48.513 16:11:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.513 16:11:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.513 00:05:48.513 real 0m16.405s 00:05:48.513 user 0m36.173s 00:05:48.513 sys 0m2.526s 00:05:48.513 16:11:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.513 16:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.513 ************************************ 00:05:48.513 END TEST app_repeat 00:05:48.513 ************************************ 00:05:48.513 16:11:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.513 16:11:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.513 16:11:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.513 16:11:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.513 16:11:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.513 ************************************ 00:05:48.513 START TEST cpu_locks 00:05:48.513 ************************************ 00:05:48.513 16:11:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.513 * Looking for test storage... 00:05:48.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.513 16:11:37 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.513 16:11:37 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.513 16:11:37 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.772 16:11:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.772 --rc genhtml_branch_coverage=1 00:05:48.772 --rc genhtml_function_coverage=1 00:05:48.772 --rc genhtml_legend=1 00:05:48.772 --rc geninfo_all_blocks=1 00:05:48.772 --rc geninfo_unexecuted_blocks=1 00:05:48.772 00:05:48.772 ' 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.772 --rc genhtml_branch_coverage=1 00:05:48.772 --rc genhtml_function_coverage=1 00:05:48.772 --rc genhtml_legend=1 00:05:48.772 --rc geninfo_all_blocks=1 00:05:48.772 --rc geninfo_unexecuted_blocks=1 00:05:48.772 00:05:48.772 ' 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.772 --rc genhtml_branch_coverage=1 00:05:48.772 --rc genhtml_function_coverage=1 00:05:48.772 --rc genhtml_legend=1 00:05:48.772 --rc geninfo_all_blocks=1 00:05:48.772 --rc geninfo_unexecuted_blocks=1 00:05:48.772 00:05:48.772 ' 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.772 --rc genhtml_branch_coverage=1 00:05:48.772 --rc genhtml_function_coverage=1 00:05:48.772 --rc genhtml_legend=1 00:05:48.772 --rc geninfo_all_blocks=1 00:05:48.772 --rc geninfo_unexecuted_blocks=1 00:05:48.772 00:05:48.772 ' 00:05:48.772 16:11:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.772 16:11:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.772 16:11:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.772 16:11:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.772 16:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.772 ************************************ 
00:05:48.772 START TEST default_locks 00:05:48.772 ************************************ 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=782881 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 782881 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 782881 ']' 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.772 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.772 [2024-12-16 16:11:37.244568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:48.772 [2024-12-16 16:11:37.244611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782881 ] 00:05:48.772 [2024-12-16 16:11:37.318865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.772 [2024-12-16 16:11:37.341016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.031 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.031 16:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:49.031 16:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 782881 00:05:49.031 16:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 782881 00:05:49.031 16:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.597 lslocks: write error 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 782881 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 782881 ']' 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 782881 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782881 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.597 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782881' 
00:05:49.597 killing process with pid 782881 00:05:49.598 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 782881 00:05:49.598 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 782881 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 782881 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 782881 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 782881 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 782881 ']' 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (782881) - No such process 00:05:49.857 ERROR: process (pid: 782881) is no longer running 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.857 00:05:49.857 real 0m1.193s 00:05:49.857 user 0m1.141s 00:05:49.857 sys 0m0.553s 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.857 16:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.857 ************************************ 00:05:49.857 END TEST default_locks 00:05:49.857 ************************************ 00:05:49.857 16:11:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.857 16:11:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.857 16:11:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.857 16:11:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.857 ************************************ 00:05:49.857 START TEST default_locks_via_rpc 00:05:49.857 ************************************ 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=783131 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 783131 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 783131 ']' 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
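The default_locks test above reduces to one check: a freshly started spdk_tgt -m 0x1 must hold a lock on its per-core lock file. A minimal standalone recap, assuming spdk_tgt is already running and "$pid" holds its PID (the helper name mirrors the locks_exist function seen in the trace); the benign "lslocks: write error" lines in the trace come from grep -q closing the pipe as soon as it matches:

# Recap of the core-lock check, not the test harness itself.
locks_exist() {
  # SPDK names its per-core lock files /var/tmp/spdk_cpu_lock_NNN;
  # lslocks lists the file locks held by the given PID.
  lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist "$pid" && echo "core lock held by $pid"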
00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.857 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.117 [2024-12-16 16:11:38.505756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:50.117 [2024-12-16 16:11:38.505799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783131 ] 00:05:50.117 [2024-12-16 16:11:38.581420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.117 [2024-12-16 16:11:38.603953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 783131 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 783131 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 783131 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 783131 ']' 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 783131 00:05:50.376 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.635 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.635 16:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783131 00:05:50.635 16:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.635 16:11:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.635 16:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783131' 00:05:50.635 killing process with pid 783131 00:05:50.635 16:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 783131 00:05:50.635 16:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 783131 00:05:50.894 00:05:50.894 real 0m0.870s 00:05:50.894 user 0m0.803s 00:05:50.894 sys 0m0.429s 00:05:50.894 16:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.894 16:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.894 ************************************ 00:05:50.894 END TEST default_locks_via_rpc 00:05:50.894 ************************************ 00:05:50.894 16:11:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.894 16:11:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.894 16:11:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.894 16:11:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.894 ************************************ 00:05:50.894 START TEST non_locking_app_on_locked_coremask 00:05:50.894 ************************************ 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=783379 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 783379 /var/tmp/spdk.sock 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 783379 ']' 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.894 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.894 [2024-12-16 16:11:39.445021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
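The default_locks_via_rpc run that just finished toggles lock enforcement at runtime rather than at startup. A sketch of the same sequence, assuming a repo-root working directory and a target on the default /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper over the repo's scripts/rpc.py; the method names are taken verbatim from the trace):

./scripts/rpc.py framework_disable_cpumask_locks        # release the core claims
lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core locks held"
./scripts/rpc.py framework_enable_cpumask_locks         # reclaim them
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"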
00:05:50.894 [2024-12-16 16:11:39.445060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783379 ] 00:05:51.153 [2024-12-16 16:11:39.520191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.153 [2024-12-16 16:11:39.543047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=783388 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 783388 /var/tmp/spdk2.sock 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 783388 ']' 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.153 16:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.412 [2024-12-16 16:11:39.795353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:51.412 [2024-12-16 16:11:39.795398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783388 ] 00:05:51.412 [2024-12-16 16:11:39.881397] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
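non_locking_app_on_locked_coremask, now underway, shows that a core lock only blocks processes that try to claim it. The two launches just traced, condensed (flags and socket paths are the ones in the trace, with the binary path shortened to its repo-relative form):

./build/bin/spdk_tgt -m 0x1 &                    # claims /var/tmp/spdk_cpu_lock_000
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                     # shares core 0 without claiming it

Only the first process holds the lock file, which is why the lslocks check that follows targets the first PID.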
00:05:51.412 [2024-12-16 16:11:39.881418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.412 [2024-12-16 16:11:39.927786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.347 16:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.347 16:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.347 16:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 783379 00:05:52.347 16:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 783379 00:05:52.347 16:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.605 lslocks: write error 00:05:52.605 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 783379 00:05:52.605 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 783379 ']' 00:05:52.605 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 783379 00:05:52.605 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.605 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.864 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783379 00:05:52.864 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.864 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.864 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783379' 00:05:52.864 killing process with pid 783379 00:05:52.864 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 783379 00:05:52.864 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 783379 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 783388 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 783388 ']' 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 783388 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783388 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783388' 00:05:53.431 killing 
process with pid 783388 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 783388 00:05:53.431 16:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 783388 00:05:53.690 00:05:53.690 real 0m2.784s 00:05:53.690 user 0m2.943s 00:05:53.690 sys 0m0.935s 00:05:53.690 16:11:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.690 16:11:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.690 ************************************ 00:05:53.690 END TEST non_locking_app_on_locked_coremask 00:05:53.690 ************************************ 00:05:53.690 16:11:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.690 16:11:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.690 16:11:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.690 16:11:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.690 ************************************ 00:05:53.690 START TEST locking_app_on_unlocked_coremask 00:05:53.690 ************************************ 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=783878 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 783878 /var/tmp/spdk.sock 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 783878 ']' 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.690 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.949 [2024-12-16 16:11:42.302568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:53.949 [2024-12-16 16:11:42.302609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783878 ] 00:05:53.949 [2024-12-16 16:11:42.377198] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
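locking_app_on_unlocked_coremask, starting here, inverts that arrangement: the first target opts out, so a second, locking target can still claim core 0. Condensed, with the same path assumptions as the previous sketch:

./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # no claim made
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # claims core 0

This time the lock belongs to the second process, so the lslocks check below runs against the second PID (783890 in this run).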
00:05:53.949 [2024-12-16 16:11:42.377222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.949 [2024-12-16 16:11:42.399785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=783890 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 783890 /var/tmp/spdk2.sock 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 783890 ']' 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.207 16:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.207 [2024-12-16 16:11:42.648548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:54.207 [2024-12-16 16:11:42.648592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783890 ] 00:05:54.207 [2024-12-16 16:11:42.735232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.207 [2024-12-16 16:11:42.781118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.774 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.774 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.774 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 783890 00:05:54.774 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 783890 00:05:54.774 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.341 lslocks: write error 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 783878 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 783878 ']' 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 783878 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783878 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783878' 00:05:55.341 killing process with pid 783878 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 783878 00:05:55.341 16:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 783878 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 783890 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 783890 ']' 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 783890 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783890 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.908 16:11:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783890' 00:05:55.908 killing process with pid 783890 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 783890 00:05:55.908 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 783890 00:05:56.167 00:05:56.167 real 0m2.454s 00:05:56.167 user 0m2.483s 00:05:56.167 sys 0m0.926s 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.167 ************************************ 00:05:56.167 END TEST locking_app_on_unlocked_coremask 00:05:56.167 ************************************ 00:05:56.167 16:11:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:56.167 16:11:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.167 16:11:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.167 16:11:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.167 ************************************ 00:05:56.167 START TEST locking_app_on_locked_coremask 00:05:56.167 ************************************ 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=784364 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 784364 /var/tmp/spdk.sock 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.167 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784364 ']' 00:05:56.168 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.168 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.168 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.168 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.168 16:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.427 [2024-12-16 16:11:44.825924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:56.427 [2024-12-16 16:11:44.825966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784364 ] 00:05:56.427 [2024-12-16 16:11:44.900548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.427 [2024-12-16 16:11:44.920351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=784371 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 784371 /var/tmp/spdk2.sock 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 784371 /var/tmp/spdk2.sock 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 784371 /var/tmp/spdk2.sock 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784371 ']' 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.685 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.685 [2024-12-16 16:11:45.176300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
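Here both targets want the lock on core 0, so the second launch must fail; the NOT wrapper around waitforlisten asserts exactly that, as the next lines show. The same expectation without the harness, as an illustrative sketch (the sleep is a crude stand-in for the harness's startup synchronization):

./build/bin/spdk_tgt -m 0x1 &            # first target claims core 0
first=$!
sleep 1                                  # crude: give it time to take the lock
if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
  echo "unexpected: second target came up on a claimed core"
else
  echo "expected: second target exited, core 0 already locked"
fi
kill "$first"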
00:05:56.685 [2024-12-16 16:11:45.176347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784371 ] 00:05:56.685 [2024-12-16 16:11:45.263377] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 784364 has claimed it. 00:05:56.685 [2024-12-16 16:11:45.263416] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (784371) - No such process 00:05:57.250 ERROR: process (pid: 784371) is no longer running 00:05:57.250 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.250 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 784364 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 784364 00:05:57.251 16:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.818 lslocks: write error 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 784364 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784364 ']' 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784364 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784364 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784364' 00:05:57.818 killing process with pid 784364 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784364 00:05:57.818 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784364 00:05:58.077 00:05:58.077 real 0m1.878s 00:05:58.077 user 0m2.012s 00:05:58.077 sys 0m0.658s 00:05:58.077 16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.077 
16:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.077 ************************************ 00:05:58.077 END TEST locking_app_on_locked_coremask 00:05:58.077 ************************************ 00:05:58.077 16:11:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.077 16:11:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.335 16:11:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.335 16:11:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.336 ************************************ 00:05:58.336 START TEST locking_overlapped_coremask 00:05:58.336 ************************************ 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=784647 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 784647 /var/tmp/spdk.sock 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 784647 ']' 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.336 16:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.336 [2024-12-16 16:11:46.777304] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:58.336 [2024-12-16 16:11:46.777345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784647 ] 00:05:58.336 [2024-12-16 16:11:46.851074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.336 [2024-12-16 16:11:46.876176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.336 [2024-12-16 16:11:46.876284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.336 [2024-12-16 16:11:46.876284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=784843 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 784843 /var/tmp/spdk2.sock 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 784843 /var/tmp/spdk2.sock 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 784843 /var/tmp/spdk2.sock 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 784843 ']' 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.594 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.594 [2024-12-16 16:11:47.125515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
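The masks explain the collision that follows: bit i of -m selects core i, so 0x7 pins cores 0-2 and 0x1c pins cores 2-4, intersecting at core 2. A quick check:

for mask in 0x7 0x1c; do
  printf 'mask %-4s -> cores:' "$mask"
  for i in {0..5}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
  printf '\n'
done
# mask 0x7  -> cores: 0 1 2
# mask 0x1c -> cores: 2 3 4

Core 2 is the one named in the "Cannot create lock on core 2" error below.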
00:05:58.594 [2024-12-16 16:11:47.125563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784843 ] 00:05:58.853 [2024-12-16 16:11:47.214827] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 784647 has claimed it. 00:05:58.853 [2024-12-16 16:11:47.214862] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (784843) - No such process 00:05:59.420 ERROR: process (pid: 784843) is no longer running 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 784647 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 784647 ']' 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 784647 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784647 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784647' 00:05:59.420 killing process with pid 784647 00:05:59.420 16:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 784647 00:05:59.420 16:11:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 784647 00:05:59.679 00:05:59.679 real 0m1.387s 00:05:59.679 user 0m3.847s 00:05:59.679 sys 0m0.387s 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.679 ************************************ 00:05:59.679 END TEST locking_overlapped_coremask 00:05:59.679 ************************************ 00:05:59.679 16:11:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.679 16:11:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.679 16:11:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.679 16:11:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.679 ************************************ 00:05:59.679 START TEST locking_overlapped_coremask_via_rpc 00:05:59.679 ************************************ 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=784966 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 784966 /var/tmp/spdk.sock 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 784966 ']' 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.679 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.679 [2024-12-16 16:11:48.229058] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:59.679 [2024-12-16 16:11:48.229108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784966 ] 00:05:59.938 [2024-12-16 16:11:48.304593] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.938 [2024-12-16 16:11:48.304620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.938 [2024-12-16 16:11:48.328613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.938 [2024-12-16 16:11:48.328725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.938 [2024-12-16 16:11:48.328726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=785101 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 785101 /var/tmp/spdk2.sock 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 785101 ']' 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.938 16:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.196 [2024-12-16 16:11:48.580316] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:00.196 [2024-12-16 16:11:48.580366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785101 ] 00:06:00.196 [2024-12-16 16:11:48.671861] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
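In the via_rpc variant both targets start with --disable-cpumask-locks, so the overlapping masks coexist at startup; the first target then claims the locks over RPC, as the next lines show. Condensed, with the same path assumptions as the earlier sketches:

./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                        # cores 0-2, no claim
./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock & # cores 2-4, no claim
./scripts/rpc.py framework_enable_cpumask_locks    # first target now holds locks for cores 0-2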
00:06:00.196 [2024-12-16 16:11:48.671888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.196 [2024-12-16 16:11:48.720620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.196 [2024-12-16 16:11:48.720738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.196 [2024-12-16 16:11:48.720740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.129 [2024-12-16 16:11:49.439171] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 784966 has claimed it. 
00:06:01.129 request: 00:06:01.129 { 00:06:01.129 "method": "framework_enable_cpumask_locks", 00:06:01.129 "req_id": 1 00:06:01.129 } 00:06:01.129 Got JSON-RPC error response 00:06:01.129 response: 00:06:01.129 { 00:06:01.129 "code": -32603, 00:06:01.129 "message": "Failed to claim CPU core: 2" 00:06:01.129 } 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.129 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 784966 /var/tmp/spdk.sock 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 784966 ']' 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 785101 /var/tmp/spdk2.sock 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 785101 ']' 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
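With the first target holding core 2's lock, asking the second target to enable locks returns the JSON-RPC internal error captured above. Reproduced directly against the second socket (a sketch; the expected error text is quoted from the response above):

./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected: JSON-RPC error -32603, "Failed to claim CPU core: 2"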
00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.130 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.399 00:06:01.399 real 0m1.695s 00:06:01.399 user 0m0.854s 00:06:01.399 sys 0m0.136s 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.399 16:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.399 ************************************ 00:06:01.399 END TEST locking_overlapped_coremask_via_rpc 00:06:01.399 ************************************ 00:06:01.399 16:11:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.399 16:11:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 784966 ]] 00:06:01.399 16:11:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 784966 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 784966 ']' 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 784966 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784966 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.399 16:11:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784966' 00:06:01.400 killing process with pid 784966 00:06:01.400 16:11:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 784966 00:06:01.400 16:11:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 784966 00:06:01.659 16:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 785101 ]] 00:06:01.659 16:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 785101 00:06:01.659 16:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 785101 ']' 00:06:01.659 16:11:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 785101 00:06:01.659 16:11:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
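The check_remaining_locks step traced above passes because the surviving target still holds one lock file per claimed core, here 000 through 002. The same check, condensed from the traced commands into a standalone sketch:

# one lock file per claimed core; this run expects exactly cores 000-002
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "all claimed-core locks present"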
00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785101 00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785101' 00:06:01.916 killing process with pid 785101 00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 785101 00:06:01.916 16:11:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 785101 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 784966 ]] 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 784966 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 784966 ']' 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 784966 00:06:02.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (784966) - No such process 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 784966 is not found' 00:06:02.175 Process with pid 784966 is not found 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 785101 ]] 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 785101 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 785101 ']' 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 785101 00:06:02.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (785101) - No such process 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 785101 is not found' 00:06:02.175 Process with pid 785101 is not found 00:06:02.175 16:11:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.175 00:06:02.175 real 0m13.637s 00:06:02.175 user 0m23.900s 00:06:02.175 sys 0m4.983s 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.175 16:11:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.175 ************************************ 00:06:02.175 END TEST cpu_locks 00:06:02.175 ************************************ 00:06:02.175 00:06:02.176 real 0m38.271s 00:06:02.176 user 1m13.309s 00:06:02.176 sys 0m8.467s 00:06:02.176 16:11:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.176 16:11:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.176 ************************************ 00:06:02.176 END TEST event 00:06:02.176 ************************************ 00:06:02.176 16:11:50 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.176 16:11:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.176 16:11:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.176 16:11:50 -- common/autotest_common.sh@10 -- # set +x 00:06:02.176 ************************************ 00:06:02.176 START TEST thread 00:06:02.176 ************************************ 00:06:02.176 16:11:50 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.434 * Looking for test storage... 00:06:02.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:02.434 16:11:50 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.434 16:11:50 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.434 16:11:50 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.434 16:11:50 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.434 16:11:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.435 16:11:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.435 16:11:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.435 16:11:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.435 16:11:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.435 16:11:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.435 16:11:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.435 16:11:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.435 16:11:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.435 16:11:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.435 16:11:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.435 16:11:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:02.435 16:11:50 thread -- scripts/common.sh@345 -- # : 1 00:06:02.435 16:11:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.435 16:11:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.435 16:11:50 thread -- scripts/common.sh@365 -- # decimal 1 00:06:02.435 16:11:50 thread -- scripts/common.sh@353 -- # local d=1 00:06:02.435 16:11:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.435 16:11:50 thread -- scripts/common.sh@355 -- # echo 1 00:06:02.435 16:11:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.435 16:11:50 thread -- scripts/common.sh@366 -- # decimal 2 00:06:02.435 16:11:50 thread -- scripts/common.sh@353 -- # local d=2 00:06:02.435 16:11:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.435 16:11:50 thread -- scripts/common.sh@355 -- # echo 2 00:06:02.435 16:11:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.435 16:11:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.435 16:11:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.435 16:11:50 thread -- scripts/common.sh@368 -- # return 0 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.435 --rc genhtml_branch_coverage=1 00:06:02.435 --rc genhtml_function_coverage=1 00:06:02.435 --rc genhtml_legend=1 00:06:02.435 --rc geninfo_all_blocks=1 00:06:02.435 --rc geninfo_unexecuted_blocks=1 00:06:02.435 00:06:02.435 ' 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.435 --rc genhtml_branch_coverage=1 00:06:02.435 --rc genhtml_function_coverage=1 00:06:02.435 --rc genhtml_legend=1 00:06:02.435 --rc geninfo_all_blocks=1 00:06:02.435 --rc geninfo_unexecuted_blocks=1 00:06:02.435 00:06:02.435 ' 00:06:02.435 16:11:50 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.435 --rc genhtml_branch_coverage=1 00:06:02.435 --rc genhtml_function_coverage=1 00:06:02.435 --rc genhtml_legend=1 00:06:02.435 --rc geninfo_all_blocks=1 00:06:02.435 --rc geninfo_unexecuted_blocks=1 00:06:02.435 00:06:02.435 ' 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.435 --rc genhtml_branch_coverage=1 00:06:02.435 --rc genhtml_function_coverage=1 00:06:02.435 --rc genhtml_legend=1 00:06:02.435 --rc geninfo_all_blocks=1 00:06:02.435 --rc geninfo_unexecuted_blocks=1 00:06:02.435 00:06:02.435 ' 00:06:02.435 16:11:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.435 16:11:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.435 ************************************ 00:06:02.435 START TEST thread_poller_perf 00:06:02.435 ************************************ 00:06:02.435 16:11:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.435 [2024-12-16 16:11:50.967902] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:02.435 [2024-12-16 16:11:50.967970] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785559 ] 00:06:02.693 [2024-12-16 16:11:51.047258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.693 [2024-12-16 16:11:51.069358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.693 Running 1000 pollers for 1 seconds with 1 microseconds period. 
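The lt 1.15 2 / cmp_versions trace that precedes this test (and recurs before every run_test below) is autotest_common.sh checking the installed lcov version to decide which coverage option spellings to export: the two version strings are split on ., - and :, then compared numerically field by field, with the first differing field deciding. A minimal sketch of that comparison with the same inputs:

# split "1.15" and "2" into fields, then compare index by index
IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
for ((v = 0; v < 2; v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "1.15 > 2"; break; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "1.15 < 2"; break; }   # taken here
done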
00:06:03.629 [2024-12-16T15:11:52.238Z] ====================================== 00:06:03.629 [2024-12-16T15:11:52.238Z] busy:2107128094 (cyc) 00:06:03.629 [2024-12-16T15:11:52.238Z] total_run_count: 423000 00:06:03.629 [2024-12-16T15:11:52.238Z] tsc_hz: 2100000000 (cyc) 00:06:03.629 [2024-12-16T15:11:52.238Z] ====================================== 00:06:03.629 [2024-12-16T15:11:52.238Z] poller_cost: 4981 (cyc), 2371 (nsec) 00:06:03.629 00:06:03.629 real 0m1.161s 00:06:03.629 user 0m1.074s 00:06:03.629 sys 0m0.083s 00:06:03.629 16:11:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.629 16:11:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.629 ************************************ 00:06:03.629 END TEST thread_poller_perf 00:06:03.629 ************************************ 00:06:03.629 16:11:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.629 16:11:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.629 16:11:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.629 16:11:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.629 ************************************ 00:06:03.629 START TEST thread_poller_perf 00:06:03.629 ************************************ 00:06:03.629 16:11:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.629 [2024-12-16 16:11:52.193865] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:03.629 [2024-12-16 16:11:52.193939] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785733 ] 00:06:03.888 [2024-12-16 16:11:52.270340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.888 [2024-12-16 16:11:52.292251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.888 Running 1000 pollers for 1 seconds with 0 microseconds period. 
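poller_cost in the summary above is derived rather than measured separately: it is the busy cycle count divided by total_run_count, converted to nanoseconds at tsc_hz. Redoing the arithmetic for this run:

# 2107128094 busy cycles over 423000 polls
echo $((2107128094 / 423000))               # => 4981 cycles per poll
# convert cycles to nanoseconds at tsc_hz = 2100000000
echo $((4981 * 1000000000 / 2100000000))    # => 2371 nsec per poll

The second invocation announced above repeats the measurement with -l 0, a 0 microsecond period, i.e. pure busy polling; its summary below comes out roughly 12x cheaper per poll (411 cyc / 195 nsec), which appears to be the contrast the two back-to-back runs are there to capture.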
00:06:04.825 [2024-12-16T15:11:53.434Z] ====================================== 00:06:04.825 [2024-12-16T15:11:53.434Z] busy:2101434650 (cyc) 00:06:04.825 [2024-12-16T15:11:53.434Z] total_run_count: 5111000 00:06:04.825 [2024-12-16T15:11:53.434Z] tsc_hz: 2100000000 (cyc) 00:06:04.825 [2024-12-16T15:11:53.434Z] ====================================== 00:06:04.825 [2024-12-16T15:11:53.434Z] poller_cost: 411 (cyc), 195 (nsec) 00:06:04.825 00:06:04.825 real 0m1.149s 00:06:04.825 user 0m1.071s 00:06:04.825 sys 0m0.074s 00:06:04.825 16:11:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.825 16:11:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.825 ************************************ 00:06:04.825 END TEST thread_poller_perf 00:06:04.825 ************************************ 00:06:04.825 16:11:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.825 00:06:04.825 real 0m2.626s 00:06:04.825 user 0m2.314s 00:06:04.825 sys 0m0.329s 00:06:04.825 16:11:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.825 16:11:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.825 ************************************ 00:06:04.825 END TEST thread 00:06:04.825 ************************************ 00:06:04.825 16:11:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:04.825 16:11:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.825 16:11:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.825 16:11:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.825 16:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:04.825 ************************************ 00:06:04.825 START TEST app_cmdline 00:06:04.825 ************************************ 00:06:04.825 16:11:53 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:05.084 * Looking for test storage... 
00:06:05.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.084 16:11:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.084 --rc genhtml_branch_coverage=1 00:06:05.084 --rc genhtml_function_coverage=1 00:06:05.084 --rc genhtml_legend=1 00:06:05.084 --rc geninfo_all_blocks=1 00:06:05.084 --rc geninfo_unexecuted_blocks=1 00:06:05.084 00:06:05.084 ' 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.084 --rc genhtml_branch_coverage=1 00:06:05.084 --rc genhtml_function_coverage=1 00:06:05.084 --rc genhtml_legend=1 00:06:05.084 --rc geninfo_all_blocks=1 00:06:05.084 --rc geninfo_unexecuted_blocks=1 
00:06:05.084 00:06:05.084 ' 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.084 --rc genhtml_branch_coverage=1 00:06:05.084 --rc genhtml_function_coverage=1 00:06:05.084 --rc genhtml_legend=1 00:06:05.084 --rc geninfo_all_blocks=1 00:06:05.084 --rc geninfo_unexecuted_blocks=1 00:06:05.084 00:06:05.084 ' 00:06:05.084 16:11:53 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.084 --rc genhtml_branch_coverage=1 00:06:05.084 --rc genhtml_function_coverage=1 00:06:05.084 --rc genhtml_legend=1 00:06:05.085 --rc geninfo_all_blocks=1 00:06:05.085 --rc geninfo_unexecuted_blocks=1 00:06:05.085 00:06:05.085 ' 00:06:05.085 16:11:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:05.085 16:11:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=786064 00:06:05.085 16:11:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 786064 00:06:05.085 16:11:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:05.085 16:11:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 786064 ']' 00:06:05.085 16:11:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.085 16:11:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.085 16:11:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.085 16:11:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.085 16:11:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 [2024-12-16 16:11:53.658217] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
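cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked on this target; the trace below confirms both that the allowed pair works and that anything else is rejected. The same three checks by hand, assuming the default /var/tmp/spdk.sock socket this target listens on:

./scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown below
./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats   # not on the allowlist: expect -32601 Method not found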
00:06:05.085 [2024-12-16 16:11:53.658265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786064 ] 00:06:05.343 [2024-12-16 16:11:53.734147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.343 [2024-12-16 16:11:53.757084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.603 16:11:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.603 16:11:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:05.603 16:11:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:05.603 { 00:06:05.603 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:05.603 "fields": { 00:06:05.603 "major": 25, 00:06:05.603 "minor": 1, 00:06:05.603 "patch": 0, 00:06:05.603 "suffix": "-pre", 00:06:05.603 "commit": "e01cb43b8" 00:06:05.603 } 00:06:05.603 } 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.603 16:11:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:05.603 16:11:54 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.925 request: 00:06:05.925 { 00:06:05.925 "method": "env_dpdk_get_mem_stats", 00:06:05.925 "req_id": 1 00:06:05.925 } 00:06:05.925 Got JSON-RPC error response 00:06:05.925 response: 00:06:05.925 { 00:06:05.925 "code": -32601, 00:06:05.925 "message": "Method not found" 00:06:05.925 } 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.925 16:11:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 786064 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 786064 ']' 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 786064 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786064 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786064' 00:06:05.925 killing process with pid 786064 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@973 -- # kill 786064 00:06:05.925 16:11:54 app_cmdline -- common/autotest_common.sh@978 -- # wait 786064 00:06:06.218 00:06:06.218 real 0m1.296s 00:06:06.218 user 0m1.510s 00:06:06.218 sys 0m0.451s 00:06:06.218 16:11:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.218 16:11:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.218 ************************************ 00:06:06.218 END TEST app_cmdline 00:06:06.218 ************************************ 00:06:06.218 16:11:54 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:06.218 16:11:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.218 16:11:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.218 16:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:06.218 ************************************ 00:06:06.218 START TEST version 00:06:06.218 ************************************ 00:06:06.218 16:11:54 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:06.518 * Looking for test storage... 
00:06:06.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:06.518 16:11:54 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.518 16:11:54 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.518 16:11:54 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.518 16:11:54 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.518 16:11:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.518 16:11:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.518 16:11:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.518 16:11:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.518 16:11:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.518 16:11:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.518 16:11:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.518 16:11:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.518 16:11:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.518 16:11:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.518 16:11:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.518 16:11:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:06.518 16:11:54 version -- scripts/common.sh@345 -- # : 1 00:06:06.518 16:11:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.518 16:11:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.518 16:11:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:06.518 16:11:54 version -- scripts/common.sh@353 -- # local d=1 00:06:06.518 16:11:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.518 16:11:54 version -- scripts/common.sh@355 -- # echo 1 00:06:06.518 16:11:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.518 16:11:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:06.518 16:11:54 version -- scripts/common.sh@353 -- # local d=2 00:06:06.518 16:11:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.518 16:11:54 version -- scripts/common.sh@355 -- # echo 2 00:06:06.518 16:11:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.518 16:11:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.518 16:11:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.518 16:11:54 version -- scripts/common.sh@368 -- # return 0 00:06:06.518 16:11:54 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.519 16:11:54 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.519 --rc genhtml_branch_coverage=1 00:06:06.519 --rc genhtml_function_coverage=1 00:06:06.519 --rc genhtml_legend=1 00:06:06.519 --rc geninfo_all_blocks=1 00:06:06.519 --rc geninfo_unexecuted_blocks=1 00:06:06.519 00:06:06.519 ' 00:06:06.519 16:11:54 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.519 --rc genhtml_branch_coverage=1 00:06:06.519 --rc genhtml_function_coverage=1 00:06:06.519 --rc genhtml_legend=1 00:06:06.519 --rc geninfo_all_blocks=1 00:06:06.519 --rc geninfo_unexecuted_blocks=1 00:06:06.519 00:06:06.519 ' 00:06:06.519 16:11:54 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.519 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.519 --rc genhtml_branch_coverage=1 00:06:06.519 --rc genhtml_function_coverage=1 00:06:06.519 --rc genhtml_legend=1 00:06:06.519 --rc geninfo_all_blocks=1 00:06:06.519 --rc geninfo_unexecuted_blocks=1 00:06:06.519 00:06:06.519 ' 00:06:06.519 16:11:54 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.519 --rc genhtml_branch_coverage=1 00:06:06.519 --rc genhtml_function_coverage=1 00:06:06.519 --rc genhtml_legend=1 00:06:06.519 --rc geninfo_all_blocks=1 00:06:06.519 --rc geninfo_unexecuted_blocks=1 00:06:06.519 00:06:06.519 ' 00:06:06.519 16:11:54 version -- app/version.sh@17 -- # get_header_version major 00:06:06.519 16:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.519 16:11:54 version -- app/version.sh@17 -- # major=25 00:06:06.519 16:11:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.519 16:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.519 16:11:54 version -- app/version.sh@18 -- # minor=1 00:06:06.519 16:11:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.519 16:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.519 16:11:54 version -- app/version.sh@19 -- # patch=0 00:06:06.519 16:11:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.519 16:11:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.519 16:11:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.519 16:11:55 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.519 16:11:55 version -- app/version.sh@22 -- # version=25.1 00:06:06.519 16:11:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.519 16:11:55 version -- app/version.sh@28 -- # version=25.1rc0 00:06:06.519 16:11:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:06.519 16:11:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.519 16:11:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:06.519 16:11:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:06.519 00:06:06.519 real 0m0.248s 00:06:06.519 user 0m0.152s 00:06:06.519 sys 0m0.139s 00:06:06.519 16:11:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.519 
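version.sh derives the version string straight from the C header and then cross-checks it against the Python package, as traced above. A condensed sketch of the header side, using the same grep/cut/tr pipeline (run from an SPDK checkout; the #define fields in version.h are tab-separated, hence the plain cut -f2):

# pull individual #defines out of include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
version="$major.$minor"   # 25.1 here, since patch is 0 and is left off

The -pre suffix is reported as rc0 in this run, giving 25.1rc0, which is exactly what python3 -c 'import spdk; print(spdk.__version__)' returns, so the [[ 25.1rc0 == 25.1rc0 ]] check passes.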
16:11:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.519 ************************************ 00:06:06.519 END TEST version 00:06:06.519 ************************************ 00:06:06.519 16:11:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:06.519 16:11:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:06.519 16:11:55 -- spdk/autotest.sh@194 -- # uname -s 00:06:06.519 16:11:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:06.519 16:11:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.519 16:11:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.519 16:11:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:06.519 16:11:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:06.519 16:11:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:06.519 16:11:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.519 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:06.808 16:11:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:06.808 16:11:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:06.808 16:11:55 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:06.808 16:11:55 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:06.808 16:11:55 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:06.808 16:11:55 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:06.808 16:11:55 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.808 16:11:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.808 16:11:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.808 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:06.808 ************************************ 00:06:06.808 START TEST nvmf_tcp 00:06:06.808 ************************************ 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.808 * Looking for test storage... 
00:06:06.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.808 16:11:55 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.808 --rc genhtml_branch_coverage=1 00:06:06.808 --rc genhtml_function_coverage=1 00:06:06.808 --rc genhtml_legend=1 00:06:06.808 --rc geninfo_all_blocks=1 00:06:06.808 --rc geninfo_unexecuted_blocks=1 00:06:06.808 00:06:06.808 ' 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.808 --rc genhtml_branch_coverage=1 00:06:06.808 --rc genhtml_function_coverage=1 00:06:06.808 --rc genhtml_legend=1 00:06:06.808 --rc geninfo_all_blocks=1 00:06:06.808 --rc geninfo_unexecuted_blocks=1 00:06:06.808 00:06:06.808 ' 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.808 --rc genhtml_branch_coverage=1 00:06:06.808 --rc genhtml_function_coverage=1 00:06:06.808 --rc genhtml_legend=1 00:06:06.808 --rc geninfo_all_blocks=1 00:06:06.808 --rc geninfo_unexecuted_blocks=1 00:06:06.808 00:06:06.808 ' 00:06:06.808 16:11:55 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.808 --rc genhtml_branch_coverage=1 00:06:06.808 --rc genhtml_function_coverage=1 00:06:06.809 --rc genhtml_legend=1 00:06:06.809 --rc geninfo_all_blocks=1 00:06:06.809 --rc geninfo_unexecuted_blocks=1 00:06:06.809 00:06:06.809 ' 00:06:06.809 16:11:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:06.809 16:11:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:06.809 16:11:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:06.809 16:11:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.809 16:11:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.809 16:11:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.809 ************************************ 00:06:06.809 START TEST nvmf_target_core 00:06:06.809 ************************************ 00:06:06.809 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:07.087 * Looking for test storage... 00:06:07.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.087 --rc genhtml_branch_coverage=1 00:06:07.087 --rc genhtml_function_coverage=1 00:06:07.087 --rc genhtml_legend=1 00:06:07.087 --rc geninfo_all_blocks=1 00:06:07.087 --rc geninfo_unexecuted_blocks=1 00:06:07.087 00:06:07.087 ' 00:06:07.087 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.087 --rc genhtml_branch_coverage=1 00:06:07.087 --rc genhtml_function_coverage=1 00:06:07.087 --rc genhtml_legend=1 00:06:07.088 --rc geninfo_all_blocks=1 00:06:07.088 --rc geninfo_unexecuted_blocks=1 00:06:07.088 00:06:07.088 ' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.088 --rc genhtml_branch_coverage=1 00:06:07.088 --rc genhtml_function_coverage=1 00:06:07.088 --rc genhtml_legend=1 00:06:07.088 --rc geninfo_all_blocks=1 00:06:07.088 --rc geninfo_unexecuted_blocks=1 00:06:07.088 00:06:07.088 ' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.088 --rc genhtml_branch_coverage=1 00:06:07.088 --rc genhtml_function_coverage=1 00:06:07.088 --rc genhtml_legend=1 00:06:07.088 --rc geninfo_all_blocks=1 00:06:07.088 --rc geninfo_unexecuted_blocks=1 00:06:07.088 00:06:07.088 ' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.088 
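The "[: : integer expression expected" message above is a real, if harmless, shell complaint: nvmf/common.sh line 33 evaluated '[' '' -eq 1 ']', and test's -eq requires an integer on both sides, so an empty variable makes the comparison itself error out and fall through to the false branch. A defensive sketch of the same kind of check (FLAG_VAR stands in for whatever variable was empty; the real name is not visible in this trace):

# default an empty/unset value to 0 before a numeric comparison
[ "${FLAG_VAR:-0}" -eq 1 ] && echo "flag enabled"

With the default in place the test is always handed an integer, so the branch taken is unchanged and the stderr noise disappears.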
************************************ 00:06:07.088 START TEST nvmf_abort 00:06:07.088 ************************************ 00:06:07.088 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:07.088 * Looking for test storage... 00:06:07.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.348 --rc genhtml_branch_coverage=1 00:06:07.348 --rc genhtml_function_coverage=1 00:06:07.348 --rc genhtml_legend=1 00:06:07.348 --rc geninfo_all_blocks=1 00:06:07.348 --rc geninfo_unexecuted_blocks=1 00:06:07.348 00:06:07.348 ' 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.348 --rc genhtml_branch_coverage=1 00:06:07.348 --rc genhtml_function_coverage=1 00:06:07.348 --rc genhtml_legend=1 00:06:07.348 --rc geninfo_all_blocks=1 00:06:07.348 --rc geninfo_unexecuted_blocks=1 00:06:07.348 00:06:07.348 ' 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.348 --rc genhtml_branch_coverage=1 00:06:07.348 --rc genhtml_function_coverage=1 00:06:07.348 --rc genhtml_legend=1 00:06:07.348 --rc geninfo_all_blocks=1 00:06:07.348 --rc geninfo_unexecuted_blocks=1 00:06:07.348 00:06:07.348 ' 00:06:07.348 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.348 --rc genhtml_branch_coverage=1 00:06:07.348 --rc genhtml_function_coverage=1 00:06:07.348 --rc genhtml_legend=1 00:06:07.348 --rc geninfo_all_blocks=1 00:06:07.348 --rc geninfo_unexecuted_blocks=1 00:06:07.349 00:06:07.349 ' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
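The lt/cmp_versions trace earlier in this test (scripts/common.sh, driving the lcov version check) splits versions on '.', '-' and ':' and compares them component-wise. A condensed re-implementation sketch of that logic, simplified and not the actual SPDK helper:

    lt() {   # return 0 (true) when $1 < $2, component-wise
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly greater
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check traced above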
00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.349 16:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.918 16:12:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.918 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:13.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:13.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.919 16:12:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:13.919 Found net devices under 0000:af:00.0: cvl_0_0 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:13.919 Found net devices under 0000:af:00.1: cvl_0_1 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.919 16:12:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:06:13.919 00:06:13.919 --- 10.0.0.2 ping statistics --- 00:06:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.919 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:13.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:06:13.919 00:06:13.919 --- 10.0.0.1 ping statistics --- 00:06:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.919 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=789797 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 789797 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 789797 ']' 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.919 16:12:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.919 [2024-12-16 16:12:01.866532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:13.919 [2024-12-16 16:12:01.866576] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.919 [2024-12-16 16:12:01.946036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.919 [2024-12-16 16:12:01.969744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.919 [2024-12-16 16:12:01.969781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.919 [2024-12-16 16:12:01.969788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.919 [2024-12-16 16:12:01.969793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.920 [2024-12-16 16:12:01.969798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:13.920 [2024-12-16 16:12:01.971073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.920 [2024-12-16 16:12:01.971162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.920 [2024-12-16 16:12:01.971163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 [2024-12-16 16:12:02.102865] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 Malloc0 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 Delay0 
00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 [2024-12-16 16:12:02.189478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.920 16:12:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:13.920 [2024-12-16 16:12:02.323416] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:15.821 Initializing NVMe Controllers 00:06:15.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:15.821 controller IO queue size 128 less than required 00:06:15.821 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:15.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:15.821 Initialization complete. Launching workers. 
00:06:15.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37666 00:06:15.821 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37727, failed to submit 62 00:06:15.821 success 37670, unsuccessful 57, failed 0 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:15.821 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:15.821 rmmod nvme_tcp 00:06:16.080 rmmod nvme_fabrics 00:06:16.080 rmmod nvme_keyring 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 789797 ']' 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 789797 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 789797 ']' 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 789797 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789797 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 789797' 00:06:16.080 killing process with pid 789797 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 789797 00:06:16.080 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 789797 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.339 16:12:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:18.266 00:06:18.266 real 0m11.169s 00:06:18.266 user 0m11.608s 00:06:18.266 sys 0m5.387s 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.266 ************************************ 00:06:18.266 END TEST nvmf_abort 00:06:18.266 ************************************ 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:18.266 ************************************ 00:06:18.266 START TEST nvmf_ns_hotplug_stress 00:06:18.266 ************************************ 00:06:18.266 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:18.526 * Looking for test storage... 
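START TEST nvmf_ns_hotplug_stress repeats the same common.sh bootstrap before its own loop. As a rough sketch of the pattern the test name implies (assumed shape only, not the actual ns_hotplug_stress.sh; the subsystem NQN is hypothetical), a namespace is repeatedly attached and detached while traffic runs:

    rpc=./scripts/rpc.py
    for i in $(seq 1 10); do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # hot-add nsid 1
        sleep 1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # hot-remove nsid 1
    done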
00:06:18.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.526 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.526 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.526 16:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.526 --rc genhtml_branch_coverage=1 00:06:18.526 --rc genhtml_function_coverage=1 00:06:18.526 --rc genhtml_legend=1 00:06:18.526 --rc geninfo_all_blocks=1 00:06:18.526 --rc geninfo_unexecuted_blocks=1 00:06:18.526 00:06:18.526 ' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.526 --rc genhtml_branch_coverage=1 00:06:18.526 --rc genhtml_function_coverage=1 00:06:18.526 --rc genhtml_legend=1 00:06:18.526 --rc geninfo_all_blocks=1 00:06:18.526 --rc geninfo_unexecuted_blocks=1 00:06:18.526 00:06:18.526 ' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.526 --rc genhtml_branch_coverage=1 00:06:18.526 --rc genhtml_function_coverage=1 00:06:18.526 --rc genhtml_legend=1 00:06:18.526 --rc geninfo_all_blocks=1 00:06:18.526 --rc geninfo_unexecuted_blocks=1 00:06:18.526 00:06:18.526 ' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.526 --rc genhtml_branch_coverage=1 00:06:18.526 --rc genhtml_function_coverage=1 00:06:18.526 --rc genhtml_legend=1 00:06:18.526 --rc geninfo_all_blocks=1 00:06:18.526 --rc geninfo_unexecuted_blocks=1 00:06:18.526 00:06:18.526 ' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.526 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:18.527 16:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:25.097 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.097 
16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:25.097 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.097 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:25.098 Found net devices under 0000:af:00.0: cvl_0_0 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:25.098 Found net devices under 0000:af:00.1: cvl_0_1 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:25.098 16:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:25.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:25.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:06:25.098 00:06:25.098 --- 10.0.0.2 ping statistics --- 00:06:25.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.098 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:25.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:25.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:06:25.098 00:06:25.098 --- 10.0.0.1 ping statistics --- 00:06:25.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.098 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=794266 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 794266 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
794266 ']' 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.098 [2024-12-16 16:12:13.113899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:25.098 [2024-12-16 16:12:13.113948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.098 [2024-12-16 16:12:13.194011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.098 [2024-12-16 16:12:13.215722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.098 [2024-12-16 16:12:13.215758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.098 [2024-12-16 16:12:13.215765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.098 [2024-12-16 16:12:13.215771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.098 [2024-12-16 16:12:13.215775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
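Note for readers of this trace: the block above is nvmf/common.sh assembling the NVMe/TCP test topology. One port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), its sibling (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), and the target app is then launched inside that namespace. A minimal sketch of the same setup follows, with interface names, addresses, and flags taken from the log; SPDK_DIR stands in for the full Jenkins checkout path, and this is a simplified paraphrase of nvmf_tcp_init, not the verbatim script:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Connectivity checks in both directions, as in the pings above.
    ping -c 1 10.0.0.2
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
    # Launch the target inside the namespace (core mask 0xE, tracepoint mask 0xFFFF).
    ip netns exec "$NVMF_TARGET_NAMESPACE" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

Running target and initiator over two namespaced ports of a single NIC exercises a real wire-level TCP path without requiring a second machine.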
00:06:25.098 [2024-12-16 16:12:13.217199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.098 [2024-12-16 16:12:13.217230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.098 [2024-12-16 16:12:13.217231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:25.098 [2024-12-16 16:12:13.516985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.098 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:25.356 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.356 [2024-12-16 16:12:13.898385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.356 16:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:25.614 16:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:25.873 Malloc0 00:06:25.873 16:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.130 Delay0 00:06:26.130 16:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.130 16:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:26.388 NULL1 00:06:26.388 16:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:26.645 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=794550 00:06:26.645 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:26.645 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:26.645 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.903 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.160 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:27.160 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:27.160 true 00:06:27.160 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:27.160 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.417 16:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.674 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:27.674 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:27.932 true 00:06:27.932 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:27.932 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.190 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.448 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:28.448 16:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:28.448 true 00:06:28.448 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:28.448 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.705 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.963 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:28.963 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:29.220 true 00:06:29.220 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:29.220 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.477 16:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.735 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:29.735 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:29.735 true 00:06:29.735 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:29.735 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.992 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.250 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:30.250 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:30.508 true 00:06:30.508 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:30.508 16:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.766 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.766 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:30.766 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:31.023 true 00:06:31.023 16:12:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:31.023 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.281 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.539 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:31.539 16:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:31.797 true 00:06:31.797 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:31.797 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.056 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.056 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:32.056 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:32.314 true 00:06:32.314 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:32.314 16:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.573 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.831 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:32.831 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:33.089 true 00:06:33.089 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:33.089 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.089 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.347 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:33.347 16:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:33.604 true 00:06:33.604 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:33.604 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.863 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.120 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:34.120 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:34.120 true 00:06:34.378 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:34.378 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.378 16:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.637 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:34.637 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:34.895 true 00:06:34.895 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:34.895 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.153 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.411 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:35.411 16:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:35.411 true 00:06:35.411 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:35.411 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.669 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.928 16:12:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:35.928 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:36.185 true 00:06:36.185 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:36.185 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.524 16:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.524 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:36.524 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:36.781 true 00:06:36.781 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:36.781 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.038 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.295 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:37.295 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:37.295 true 00:06:37.295 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:37.295 16:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.553 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.810 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:37.811 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:38.068 true 00:06:38.068 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:38.068 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.325 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.582 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:38.582 16:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:38.582 true 00:06:38.839 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:38.839 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.839 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.096 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:39.096 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:39.353 true 00:06:39.353 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:39.353 16:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.611 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.869 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:39.869 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:40.126 true 00:06:40.127 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:40.127 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.127 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.384 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:40.384 16:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:40.641 true 00:06:40.642 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:40.642 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.899 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.157 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:41.157 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:41.415 true 00:06:41.415 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:41.415 16:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.672 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.672 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:41.672 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:41.929 true 00:06:41.929 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:41.929 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.186 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.443 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:42.443 16:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:42.700 true 00:06:42.700 16:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:42.700 16:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.957 16:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.957 16:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:42.957 16:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:43.215 true 00:06:43.215 16:12:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:43.215 16:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.472 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.729 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:43.729 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:43.986 true 00:06:43.986 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:43.986 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.243 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.500 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:44.500 16:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:44.500 true 00:06:44.500 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:44.500 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.758 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.015 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:45.015 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:45.272 true 00:06:45.272 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:45.272 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.529 16:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.786 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:45.786 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:45.786 true 00:06:45.786 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:45.786 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.044 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.301 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:46.301 16:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:46.558 true 00:06:46.558 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:46.558 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.815 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.815 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:46.815 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:47.087 true 00:06:47.087 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:47.087 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.345 16:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.602 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:47.602 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:47.859 true 00:06:47.859 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:47.859 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.116 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.116 16:12:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:48.116 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:48.374 true 00:06:48.374 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:48.374 16:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.631 16:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.888 16:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:48.888 16:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:49.144 true 00:06:49.145 16:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:49.145 16:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.402 16:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.402 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:49.402 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:49.660 true 00:06:49.660 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:49.660 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.917 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.175 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:50.175 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:50.433 true 00:06:50.433 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:50.433 16:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.690 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.948 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:50.948 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:50.948 true 00:06:50.948 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:50.948 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.206 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.466 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:51.466 16:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:51.723 true 00:06:51.723 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:51.724 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.981 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.238 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:52.238 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:52.238 true 00:06:52.238 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:52.238 16:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.496 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.754 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:52.754 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:53.012 true 00:06:53.012 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:53.012 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.270 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.528 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:53.528 16:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:53.528 true 00:06:53.528 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:53.528 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.785 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.043 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:54.043 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:54.301 true 00:06:54.301 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:54.301 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.559 16:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.816 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:54.817 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:54.817 true 00:06:54.817 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:54.817 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.074 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.332 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:55.332 16:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:55.589 true 00:06:55.589 16:12:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:55.589 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.847 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.847 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:55.847 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:56.105 true 00:06:56.105 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:56.105 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.362 16:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.620 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:56.620 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:56.877 true 00:06:56.877 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:56.877 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.877 Initializing NVMe Controllers 00:06:56.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:56.877 Controller IO queue size 128, less than required. 00:06:56.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:56.877 Initialization complete. Launching workers. 
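The xtrace above is the hot-plug/resize loop at ns_hotplug_stress.sh lines 44-50: while the I/O generator (pid 794550) stays alive, the script hot-removes namespace 1 from cnode1, re-adds it backed by the Delay0 bdev, and grows the NULL1 bdev by one block each pass. A minimal bash sketch of that loop, reconstructed from the trace rather than quoted from the script; the rpc_py and PERF_PID variables and the starting null_size are assumptions:

    # Reconstructed from the xtrace above; not the script verbatim.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper variable
    PERF_PID=794550    # assumed: pid of the I/O generator started earlier in the test
    null_size=1024     # assumed starting size; the trace shows it at 1038 and counting
    while kill -0 "$PERF_PID" 2>/dev/null; do                               # sh@44: run while perf lives
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back
        null_size=$((null_size + 1))                                        # sh@49: 1038, 1039, ...
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                       # sh@50: resize under I/O
    done

When the generator exits, the kill -0 probe fails (the "No such process" message below) and the script falls through to wait and the namespace cleanup.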
00:06:56.877 ========================================================
00:06:56.877                                                Latency(us)
00:06:56.877 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:06:56.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0 : 27626.93   13.49    4632.94    1533.13    8733.15
00:06:56.877 ========================================================
00:06:56.877 Total                                                                    : 27626.93   13.49    4632.94    1533.13    8733.15
00:06:57.136 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.136 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:57.136 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:57.394 true 00:06:57.394 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 794550 00:06:57.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (794550) - No such process 00:06:57.394 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 794550 00:06:57.394 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.652 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:57.927 null0 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.927 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:58.186 null1 00:06:58.186 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.186 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.186 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:58.445 null2 00:06:58.445 16:12:46
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.445 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.445 16:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:58.703 null3 00:06:58.703 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.703 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.703 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:58.703 null4 00:06:58.703 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.703 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.703 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:58.961 null5 00:06:58.962 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.962 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.962 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:59.220 null6 00:06:59.220 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.220 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.220 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:59.479 null7 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
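A quick sanity check on the perf summary above: 27626.93 IOPS at 13.49 MiB/s works out to 13.49 × 1048576 / 27626.93 ≈ 512 bytes per I/O, so the two throughput columns are mutually consistent. The trace then moves into the concurrent phase: sh@58-60 create one null bdev per worker thread, each 100 MB with a 4096-byte block size. A sketch under the same rpc_py assumption as above:

    # Setup traced above (sh@58-60): one null bdev per add/remove worker.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do               # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096   # sh@60: rpc.py echoes "null0".."null7"
    done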
00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.479 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
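From here on the xtrace is eight background workers multiplexed into one log, which is why the (( ++i )) and rpc.py lines interleave out of order. Each worker runs the add_remove helper (sh@14-18), adding and removing its own namespace ID on cnode1 ten times, while the spawn loop (sh@62-66) collects the worker pids and waits on them. A reconstructed sketch, same assumptions as above, not the script verbatim:

    # Concurrent add/remove phase (sh@14-18 and sh@62-66), reconstructed from the trace.
    add_remove() {
        local nsid=$1 bdev=$2                          # sh@14: one worker per (nsid, bdev) pair
        for ((i = 0; i < 10; i++)); do                 # sh@16: ten add/remove cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
    for ((i = 0; i < nthreads; i++)); do               # sh@62
        add_remove $((i + 1)) "null$i" &               # sh@63: NSID i+1 backed by null$i
        pids+=($!)                                     # sh@64: remember the worker pid
    done
    wait "${pids[@]}"                                  # sh@66: the "wait 800066 800068 ..." below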
00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 800066 800068 800071 800074 800076 800078 800080 800083 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.480 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.480 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.480 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.739 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.998 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.256 16:12:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.256 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.257 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.515 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.515 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.773 16:12:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.773 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.032 16:12:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.032 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.291 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.549 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.550 16:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.550 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.550 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.550 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.808 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:01.808-00:07:03.626 16:12:50-16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16-@18: hotplug stress loop, iterations i = 0..9. Each pass issues rpc.py nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1> for nsids 1-8 and then rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid> for nsids 1-8; the completion order of the adds and removes shuffles from pass to pass, and the final passes only advance the (( ++i )) / (( i < 10 )) counter with no further RPCs.
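For reference, the churn above reduces to a short loop. This is a sketch, not the literal ns_hotplug_stress.sh: the backgrounding of the RPCs is an assumption made to explain why the add/remove order interleaves in the trace, and the rpc/nqn variable names are illustrative.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        for n in {1..8}; do
            # null0..null7 are the null bdevs created earlier in the test
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" &
        done
        wait
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
        done
        wait
    done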
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 794266 ']'
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 794266
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 794266 ']'
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 794266
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794266
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794266'
killing process with pid 794266
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 794266
00:07:03.626 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 794266
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
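The module teardown above is built to tolerate busy modules. A hedged sketch of that nvmfcleanup pattern (the retry/break structure and the sleep are assumptions; the trace only shows one successful pass through the {1..20} loop):

    sync
    set +e
    for i in {1..20}; do
        # rmmod fails with EBUSY while an initiator still holds a reference
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e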
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:03.886 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:06.427
00:07:06.427 real 0m47.568s
00:07:06.427 user 3m22.367s
00:07:06.427 sys 0m17.143s
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:06.427 ************************************
00:07:06.427 END TEST nvmf_ns_hotplug_stress
00:07:06.427 ************************************
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:06.427 ************************************
00:07:06.427 START TEST nvmf_delete_subsystem
00:07:06.427 ************************************
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:06.427 * Looking for test storage...
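The START/END banners and the real/user/sys block above come from the harness's run_test wrapper. A hedged reconstruction, with the echo format inferred from this log and the body assumed:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp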
00:07:06.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333-@368: cmp_versions 1.15 '<' 2 splits both versions on IFS=.-:, compares them field by field through the decimal helper (1 < 2 in the first field), and returns 0, so the installed lcov 1.15 is treated as older than 2.
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724-@1725: LCOV_OPTS and LCOV are exported with the same multi-line option block (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1); the raw trace echoes the block four times, once per assignment and export.
00:07:06.427 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
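The version test traced above is easy to misread in xtrace form. A minimal standalone sketch of the same field-by-field comparison (simplified to dotted numeric versions; the real scripts/common.sh also splits on '-' and ':' and validates each field with its decimal helper):

    lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the branch taken above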
00:07:06.427-00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7-@22: test defaults are set: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 (from nvme gen-hostnqn), NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562, NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn.
00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh (shopt -s extglob; /bin/wpdk_common.sh absent; /etc/opt/spdk-pkgdep/paths/export.sh present and sourced)
00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2-@6: every nested source re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, and /opt/go/1.21.1/bin, so the PATH that is finally exported and echoed carries repeated copies of those three toolchain directories ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin.
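Nothing in the harness depends on that duplication, but a PATH like the one above can be squashed with a hedged one-liner that keeps first occurrences in order (assumes no newlines or empty entries in PATH):

    # split on ':', keep the first occurrence of each directory, rejoin
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH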
00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37-@55: no extra app args are added; have_pci_nics=0.
00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:07:06.428 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469-@476: transport tcp confirmed, trap nvmftestfini SIGINT SIGTERM EXIT installed; prepare_net_devs starts with is_hw=no, removes any stale spdk netns, and, since NET_TYPE=phy, calls gather_supported_nvmf_pci_devs.
00:07:13.013 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313-@356: the e810/x722/mlx PCI ID tables are filled in (e810: 0x1592, 0x159b; x722: 0x37d2; mlx: 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) and pci_devs is seeded from the e810 list.
00:07:13.013 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361-@378: two matching functions are found, neither with an unknown or unbound driver:
Found 0000:af:00.0 (0x8086 - 0x159b)
Found 0000:af:00.1 (0x8086 - 0x159b)
00:07:13.013 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410-@429: for each function the kernel net devices under /sys/bus/pci/devices/<pci>/net are globbed, checked for link up, and collected:
Found net devices under 0000:af:00.0: cvl_0_0
Found net devices under 0000:af:00.1: cvl_0_1
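A compact sketch of the sysfs walk behind those 'Found net devices' lines (the PCI addresses are the ones discovered above; the real gather_supported_nvmf_pci_devs also filters on driver type and link state):

    for pci in 0000:af:00.0 0000:af:00.1; do
        devs=( "/sys/bus/pci/devices/$pci/net/"* )   # one entry per netdev the NIC exposes
        [[ -e ${devs[0]} ]] && echo "Found net devices under $pci: ${devs[*]##*/}"
    done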
00:07:13.013 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432-@446: two NICs collected, so is_hw=yes and, the transport being tcp, nvmf_tcp_init runs.
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250-@266: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1, NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk, NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE").
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
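Collected from the trace above, the whole target/initiator split is reproducible with a handful of commands (assumes root and the cvl_0_0/cvl_0_1 devices found earlier; the iptables comment tag is what lets iptr strip the rule again at teardown):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'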
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293-@502: both directions answer; NVMF_APP is prefixed with the namespace exec command, NVMF_TRANSPORT_OPTS becomes '-t tcp -o', and modprobe nvme-tcp loads the initiator modules.
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=804598
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 804598 (rpc_addr /var/tmp/spdk.sock, max_retries=100)
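The launch-and-wait step above, as a standalone sketch. The polling loop is an assumption standing in for the harness's waitforlisten helper; rpc_get_methods is just a cheap RPC that starts succeeding once the socket is live:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.2
    done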
00:07:13.014 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-12-16 16:13:00.868535] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
[2024-12-16 16:13:00.868576] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-16 16:13:00.931606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-12-16 16:13:00.953936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-16 16:13:00.953972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-16 16:13:00.953979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-16 16:13:00.953986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-16 16:13:00.953991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-16 16:13:00.957111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-16 16:13:00.957114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.014 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864-@868 / nvmf/common.sh@511-@512: waitforlisten returns 0 once the socket answers; timing_exit start_nvmf_tgt; trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT installed.
00:07:13.014 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
[2024-12-16 16:13:01.093008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:13.014 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-12-16 16:13:01.113241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
NULL1
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
Delay0
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=804625
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:07:13.015 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
[2024-12-16 16:13:01.224178] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
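Those RPCs are the whole target build-out for this test, collected from the trace into one block (rpc_cmd in the harness is a thin wrapper around rpc.py against /var/tmp/spdk.sock; the rpc/nqn variables are illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # -a: allow any host, -m: max 10 namespaces
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                               # 1000 MiB null bdev, 512 B blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

The delay bdev's latencies are given in microseconds, so every I/O sits for about a second; that is what guarantees commands are still in flight when the subsystem is deleted below.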
00:07:14.643 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.643 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.643 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 starting I/O failed: -6 00:07:14.902 Write completed with error (sct=0, sc=8) 00:07:14.902 Write completed with error (sct=0, sc=8) 00:07:14.902 Write completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 starting I/O failed: -6 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Write completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 starting I/O failed: -6 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 starting I/O failed: -6 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 starting I/O failed: -6 00:07:14.902 Write completed with error (sct=0, sc=8) 00:07:14.902 Read completed with error (sct=0, sc=8) 00:07:14.902 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write 
completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting 
I/O failed: -6 00:07:14.903 starting I/O failed: -6 00:07:14.903 starting I/O failed: -6 00:07:14.903 starting I/O failed: -6 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 starting I/O failed: -6 00:07:14.903 [2024-12-16 16:13:03.264051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f44000c80 is same with the state(6) to be set 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.903 Read completed with error (sct=0, sc=8) 00:07:14.903 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error 
(sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:14.904 Read completed with error (sct=0, sc=8) 00:07:14.904 Write completed with error (sct=0, sc=8) 00:07:15.839 [2024-12-16 16:13:04.237203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed190 is same with the state(6) to be set 00:07:15.839 Write completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Write completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Write completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Write completed with error (sct=0, sc=8) 00:07:15.839 Write completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Write completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 
Write completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.839 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 [2024-12-16 16:13:04.263596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eef70 is same with the state(6) to be set 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 [2024-12-16 16:13:04.263967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ef5e0 is same with the state(6) to be set 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with 
error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 [2024-12-16 16:13:04.266671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f4400d800 is same with the state(6) to be set 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Write completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 Read completed with error (sct=0, sc=8) 00:07:15.840 [2024-12-16 16:13:04.267278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f4400d060 is same with the state(6) to be set 00:07:15.840 Initializing NVMe Controllers 00:07:15.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:15.840 Controller IO queue size 128, less than required. 00:07:15.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:15.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:15.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:15.840 Initialization complete. Launching workers. 
00:07:15.840 ======================================================== 00:07:15.840 Latency(us) 00:07:15.840 Device Information : IOPS MiB/s Average min max 00:07:15.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.19 0.09 908412.11 357.19 1006566.38 00:07:15.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.28 0.08 912489.37 287.54 1011494.39 00:07:15.840 ======================================================== 00:07:15.840 Total : 346.47 0.17 910321.86 287.54 1011494.39 00:07:15.840 00:07:15.840 [2024-12-16 16:13:04.267838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ed190 (9): Bad file descriptor 00:07:15.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:15.840 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.840 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:15.840 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 804625 00:07:15.840 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 804625 00:07:16.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (804625) - No such process 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 804625 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 804625 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 804625 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.407 16:13:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 [2024-12-16 16:13:04.796863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=805304 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:16.407 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.407 [2024-12-16 16:13:04.886188] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
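The error storm traced above is the expected outcome of deleting the subsystem while perf still has queue-depth-128 I/O parked in the ~1 s delay bdev: every queued command completes aborted (sct=0, sc=8 is the generic "command aborted due to SQ deletion" status), and new submissions fail with -6 (-ENXIO) once the qpairs drop. The script then simply polls for perf to notice the dead controller and exit; a sketch of that loop as traced (delete_subsystem.sh lines 32-38 for the first pass, and the same shape with a bound of 20 at lines 56-60 for the second pass whose iterations follow below), with the bail-out action being a guess since the trace only shows the counter check:

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # @32 in the trace

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do                # @35
    # @38; what happens past the bound is not shown in the trace
    (( delay++ > 30 )) && { echo "perf hung after delete" >&2; exit 1; }
    sleep 0.5                                            # @36
done
wait "$perf_pid" || true    # perf exits non-zero ("errors occurred"), expected here

The NOT/wait dance in the trace (valid_exec_arg, es=1) is the test harness asserting that wait on the already-reaped pid fails; the `|| true` above is a simplification of that.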
00:07:16.972 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.972 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:16.972 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.230 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.230 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:17.230 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.794 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.794 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:17.794 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.360 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.360 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:18.360 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.927 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.927 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:18.927 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.494 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.494 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:19.494 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:19.494 Initializing NVMe Controllers 00:07:19.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.494 Controller IO queue size 128, less than required. 00:07:19.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:19.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:19.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:19.494 Initialization complete. Launching workers. 
00:07:19.494 ======================================================== 00:07:19.494 Latency(us) 00:07:19.494 Device Information : IOPS MiB/s Average min max 00:07:19.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002170.97 1000178.59 1041181.54 00:07:19.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003696.19 1000186.94 1009962.15 00:07:19.494 ======================================================== 00:07:19.494 Total : 256.00 0.12 1002933.58 1000178.59 1041181.54 00:07:19.494 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805304 00:07:19.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (805304) - No such process 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 805304 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.753 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.753 rmmod nvme_tcp 00:07:20.013 rmmod nvme_fabrics 00:07:20.013 rmmod nvme_keyring 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 804598 ']' 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 804598 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 804598 ']' 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 804598 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804598 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 804598' 00:07:20.013 killing process with pid 804598 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 804598 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 804598 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.013 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.272 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.272 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:20.272 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.272 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.272 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.180 00:07:22.180 real 0m16.180s 00:07:22.180 user 0m28.997s 00:07:22.180 sys 0m5.335s 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.180 ************************************ 00:07:22.180 END TEST nvmf_delete_subsystem 00:07:22.180 ************************************ 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.180 ************************************ 00:07:22.180 START TEST nvmf_host_management 00:07:22.180 ************************************ 00:07:22.180 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.440 * Looking for test storage... 
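With both passes done, the trailing trace is nvmftestfini unwinding the fixture: unload the host-side NVMe kernel modules, kill the nvmf_tgt app (pid 804598 in this run), strip the SPDK_NVMF-tagged iptables rules, and drop the target network namespace before the next test (nvmf_host_management) runs nvmftestinit again. A simplified sketch of that teardown, assuming the app pid is in $tgt_pid (hypothetical name; the trace hardcodes the pid) and that _remove_spdk_ns reduces to deleting the cvl_0_0_ns_spdk namespace:

# Host side: unload the kernel initiator stack (the rmmod lines in the trace).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Target side: stop the app. The real killprocess also verifies ownership
# first via 'ps --no-headers -o comm=' before sending the signal.
kill "$tgt_pid" && wait "$tgt_pid" || true
# Keep every firewall rule except the SPDK_NVMF-tagged ones.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Drop the namespace nvmftestinit created and flush the initiator-side
# interface, matching the 'ip -4 addr flush cvl_0_1' in the trace.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

The host_management trace that follows shows the mirror image: gather_supported_nvmf_pci_devs rediscovering the two e810 ports (cvl_0_0, cvl_0_1) and nvmf_tcp_init rebuilding the namespace, moving cvl_0_0 into cvl_0_0_ns_spdk with 10.0.0.2/24 and leaving cvl_0_1 on 10.0.0.1/24 as the initiator side.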
00:07:22.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.440 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:22.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.440 --rc genhtml_branch_coverage=1 00:07:22.440 --rc genhtml_function_coverage=1 00:07:22.440 --rc genhtml_legend=1 00:07:22.440 --rc geninfo_all_blocks=1 00:07:22.440 --rc geninfo_unexecuted_blocks=1 00:07:22.441 00:07:22.441 ' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.441 --rc genhtml_branch_coverage=1 00:07:22.441 --rc genhtml_function_coverage=1 00:07:22.441 --rc genhtml_legend=1 00:07:22.441 --rc geninfo_all_blocks=1 00:07:22.441 --rc geninfo_unexecuted_blocks=1 00:07:22.441 00:07:22.441 ' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.441 --rc genhtml_branch_coverage=1 00:07:22.441 --rc genhtml_function_coverage=1 00:07:22.441 --rc genhtml_legend=1 00:07:22.441 --rc geninfo_all_blocks=1 00:07:22.441 --rc geninfo_unexecuted_blocks=1 00:07:22.441 00:07:22.441 ' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.441 --rc genhtml_branch_coverage=1 00:07:22.441 --rc genhtml_function_coverage=1 00:07:22.441 --rc genhtml_legend=1 00:07:22.441 --rc geninfo_all_blocks=1 00:07:22.441 --rc geninfo_unexecuted_blocks=1 00:07:22.441 00:07:22.441 ' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:22.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:22.441 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:29.014 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:29.014 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:29.014 Found net devices under 0000:af:00.0: cvl_0_0 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.014 16:13:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:29.014 Found net devices under 0000:af:00.1: cvl_0_1 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.014 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:29.015 00:07:29.015 --- 10.0.0.2 ping statistics --- 00:07:29.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.015 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:07:29.015 00:07:29.015 --- 10.0.0.1 ping statistics --- 00:07:29.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.015 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=809447 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 809447 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:29.015 16:13:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 809447 ']' 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.015 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 [2024-12-16 16:13:16.990138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:29.015 [2024-12-16 16:13:16.990179] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.015 [2024-12-16 16:13:17.066707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.015 [2024-12-16 16:13:17.089530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.015 [2024-12-16 16:13:17.089566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.015 [2024-12-16 16:13:17.089573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.015 [2024-12-16 16:13:17.089578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.015 [2024-12-16 16:13:17.089584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
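The nvmfappstart -m 0x1E step traced above amounts to launching nvmf_tgt inside the namespace created earlier and polling its RPC socket until the app answers. A minimal sketch of that startup gate, using the socket path and retry count shown in the trace (the loop shape and the rpc_get_methods probe are assumptions, not the literal helper):

# Launch the target inside the test namespace and remember its pid.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll the UNIX domain socket; rpc.py exits non-zero until the target listens.
for ((i = 100; i > 0; i--)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
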
00:07:29.015 [2024-12-16 16:13:17.090858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.015 [2024-12-16 16:13:17.090970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.015 [2024-12-16 16:13:17.091077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.015 [2024-12-16 16:13:17.091078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 [2024-12-16 16:13:17.230672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 Malloc0 00:07:29.015 [2024-12-16 16:13:17.307657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=809499 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 809499 /var/tmp/bdevperf.sock 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 809499 ']' 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:29.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:29.015 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:29.015 { 00:07:29.015 "params": { 00:07:29.015 "name": "Nvme$subsystem", 00:07:29.015 "trtype": "$TEST_TRANSPORT", 00:07:29.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:29.016 "adrfam": "ipv4", 00:07:29.016 "trsvcid": "$NVMF_PORT", 00:07:29.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:29.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:29.016 "hdgst": ${hdgst:-false}, 00:07:29.016 "ddgst": ${ddgst:-false} 00:07:29.016 }, 00:07:29.016 "method": "bdev_nvme_attach_controller" 00:07:29.016 } 00:07:29.016 EOF 00:07:29.016 )") 00:07:29.016 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:29.016 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:29.016 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:29.016 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:29.016 "params": { 00:07:29.016 "name": "Nvme0", 00:07:29.016 "trtype": "tcp", 00:07:29.016 "traddr": "10.0.0.2", 00:07:29.016 "adrfam": "ipv4", 00:07:29.016 "trsvcid": "4420", 00:07:29.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.016 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:29.016 "hdgst": false, 00:07:29.016 "ddgst": false 00:07:29.016 }, 00:07:29.016 "method": "bdev_nvme_attach_controller" 00:07:29.016 }' 00:07:29.016 [2024-12-16 16:13:17.404053] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:29.016 [2024-12-16 16:13:17.404101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809499 ] 00:07:29.016 [2024-12-16 16:13:17.481059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.016 [2024-12-16 16:13:17.503413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.274 Running I/O for 10 seconds... 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=99 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 99 -ge 100 ']' 00:07:29.274 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:29.533 
16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.533 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.533 [2024-12-16 16:13:18.106492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.533 [2024-12-16 16:13:18.106531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.533 [2024-12-16 16:13:18.106547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.533 [2024-12-16 16:13:18.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.533 [2024-12-16 16:13:18.106563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.533 [2024-12-16 16:13:18.106570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.533 [2024-12-16 16:13:18.106579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.533 [2024-12-16 16:13:18.106585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.533 [2024-12-16 16:13:18.106593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.533 [2024-12-16 16:13:18.106600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:29.533 [2024-12-16 16:13:18.106608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 
[2024-12-16 16:13:18.106760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 
16:13:18.106911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.106989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.106997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 
16:13:18.107056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.534 [2024-12-16 16:13:18.107141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.534 [2024-12-16 16:13:18.107148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 
16:13:18.107214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 
16:13:18.107360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.535 [2024-12-16 16:13:18.107483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:29.535 [2024-12-16 16:13:18.107490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ff50 is same with the state(6) to be set 00:07:29.535 [2024-12-16 16:13:18.108449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:29.535 task offset: 105344 on job bdev=Nvme0n1 fails 00:07:29.535 
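The long run of ABORTED - SQ DELETION completions above is the injected failure, not a bug: host_management.sh drops the host NQN from the subsystem while bdevperf still has 64 verify I/Os queued, so the target deletes the queue pair and every outstanding command comes back aborted. The RPC pair driving this (both calls appear verbatim in the trace as rpc_cmd invocations; the direct rpc.py form below is illustrative):

# Revoke the host: the target tears down its qpair and aborts queued I/O.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host (the add_host step just below) so the controller reset
# that bdevperf starts can reconnect.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
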
00:07:29.535 Latency(us)
00:07:29.535 [2024-12-16T15:13:18.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:29.535 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:29.535 Job: Nvme0n1 ended in about 0.40 seconds with error
00:07:29.535 Verification LBA range: start 0x0 length 0x400
00:07:29.535 Nvme0n1 : 0.40 1915.60 119.73 159.63 0.00 30014.51 1497.97 27213.04
00:07:29.535 [2024-12-16T15:13:18.144Z] ===================================================================================================================
00:07:29.535 [2024-12-16T15:13:18.144Z] Total : 1915.60 119.73 159.63 0.00 30014.51 1497.97 27213.04
00:07:29.535 [2024-12-16 16:13:18.110803] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:29.535 [2024-12-16 16:13:18.110824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98c490 (9): Bad file descriptor
00:07:29.535 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.535 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:29.535 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.535 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:29.535 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.535 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:29.794 [2024-12-16 16:13:18.163085] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
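For reference, the read_io_count probes earlier in the trace (99 on the first poll, 707 a quarter-second later) come from the waitforio helper, which makes sure bdevperf has actually completed I/O before the host is removed. Its shape, reconstructed from the traced commands:

# Poll bdevperf's iostat until at least 100 reads complete, up to 10 tries.
for ((i = 10; i != 0; i--)); do
    read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break
    sleep 0.25
done
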
00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 809499 00:07:30.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (809499) - No such process 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.727 { 00:07:30.727 "params": { 00:07:30.727 "name": "Nvme$subsystem", 00:07:30.727 "trtype": "$TEST_TRANSPORT", 00:07:30.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.727 "adrfam": "ipv4", 00:07:30.727 "trsvcid": "$NVMF_PORT", 00:07:30.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.727 "hdgst": ${hdgst:-false}, 00:07:30.727 "ddgst": ${ddgst:-false} 00:07:30.727 }, 00:07:30.727 "method": "bdev_nvme_attach_controller" 00:07:30.727 } 00:07:30.727 EOF 00:07:30.727 )") 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:30.727 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.727 "params": { 00:07:30.727 "name": "Nvme0", 00:07:30.727 "trtype": "tcp", 00:07:30.727 "traddr": "10.0.0.2", 00:07:30.727 "adrfam": "ipv4", 00:07:30.727 "trsvcid": "4420", 00:07:30.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:30.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:30.727 "hdgst": false, 00:07:30.727 "ddgst": false 00:07:30.727 }, 00:07:30.727 "method": "bdev_nvme_attach_controller" 00:07:30.727 }' 00:07:30.727 [2024-12-16 16:13:19.175725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:30.727 [2024-12-16 16:13:19.175772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809743 ] 00:07:30.727 [2024-12-16 16:13:19.251145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.727 [2024-12-16 16:13:19.272088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.985 Running I/O for 1 seconds... 
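The JSON blob printed above reaches bdevperf through a process-substitution descriptor (/dev/fd/62); each "method" entry is an RPC that the app replays at startup. Attaching the same controller by hand would look roughly like the following (flag names per the stock rpc.py client; treat the exact form as an assumption, not the literal test step):

# Hand-rolled equivalent of the generated bdev_nvme_attach_controller entry,
# using this run's addresses (10.0.0.2:4420, cnode0/host0).
./scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
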
00:07:31.917 1984.00 IOPS, 124.00 MiB/s 00:07:31.917 Latency(us) 00:07:31.917 [2024-12-16T15:13:20.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.917 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:31.917 Verification LBA range: start 0x0 length 0x400 00:07:31.917 Nvme0n1 : 1.01 2033.28 127.08 0.00 0.00 30948.04 4244.24 28711.01 00:07:31.917 [2024-12-16T15:13:20.526Z] =================================================================================================================== 00:07:31.917 [2024-12-16T15:13:20.526Z] Total : 2033.28 127.08 0.00 0.00 30948.04 4244.24 28711.01 00:07:32.175 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:32.175 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:32.175 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:32.175 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:32.175 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.176 rmmod nvme_tcp 00:07:32.176 rmmod nvme_fabrics 00:07:32.176 rmmod nvme_keyring 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 809447 ']' 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 809447 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 809447 ']' 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 809447 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 809447 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:32.176 16:13:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 809447' 00:07:32.176 killing process with pid 809447 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 809447 00:07:32.176 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 809447 00:07:32.435 [2024-12-16 16:13:20.877511] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.435 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.971 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.971 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:34.971 00:07:34.971 real 0m12.225s 00:07:34.971 user 0m18.922s 00:07:34.971 sys 0m5.625s 00:07:34.971 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.971 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.971 ************************************ 00:07:34.971 END TEST nvmf_host_management 00:07:34.971 ************************************ 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.971 ************************************ 00:07:34.971 START TEST nvmf_lvol 00:07:34.971 ************************************ 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:34.971 * Looking for test storage... 00:07:34.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:34.971 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.972 --rc genhtml_branch_coverage=1 00:07:34.972 --rc genhtml_function_coverage=1 00:07:34.972 --rc genhtml_legend=1 00:07:34.972 --rc geninfo_all_blocks=1 00:07:34.972 --rc geninfo_unexecuted_blocks=1 00:07:34.972 00:07:34.972 ' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.972 --rc genhtml_branch_coverage=1 00:07:34.972 --rc genhtml_function_coverage=1 00:07:34.972 --rc genhtml_legend=1 00:07:34.972 --rc geninfo_all_blocks=1 00:07:34.972 --rc geninfo_unexecuted_blocks=1 00:07:34.972 00:07:34.972 ' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.972 --rc genhtml_branch_coverage=1 00:07:34.972 --rc genhtml_function_coverage=1 00:07:34.972 --rc genhtml_legend=1 00:07:34.972 --rc geninfo_all_blocks=1 00:07:34.972 --rc geninfo_unexecuted_blocks=1 00:07:34.972 00:07:34.972 ' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.972 --rc genhtml_branch_coverage=1 00:07:34.972 --rc genhtml_function_coverage=1 00:07:34.972 --rc genhtml_legend=1 00:07:34.972 --rc geninfo_all_blocks=1 00:07:34.972 --rc geninfo_unexecuted_blocks=1 00:07:34.972 00:07:34.972 ' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
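The cmp_versions walk traced above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x before exporting the legacy LCOV_OPTS: it splits both dotted version strings into arrays and compares them field by field, treating missing fields as zero. A minimal standalone sketch of that comparison, with a hypothetical helper name ver_lt rather than the SPDK function itself:

    # ver_lt A B -- succeed (return 0) when dotted version A sorts strictly before B.
    ver_lt() {
        local IFS=.                  # split "1.15" -> (1 15); SPDK's helper also splits on - and :
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields compare as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2, so the pre-2.0 lcov options apply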
00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.972 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:40.334 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:40.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:40.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.335 16:13:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.335 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:40.595 Found net devices under 0000:af:00.0: cvl_0_0 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:40.595 Found net devices under 0000:af:00.1: cvl_0_1 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.595 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:07:40.595 00:07:40.595 --- 10.0.0.2 ping statistics --- 00:07:40.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.595 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:40.595 00:07:40.595 --- 10.0.0.1 ping statistics --- 00:07:40.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.595 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.595 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=813618 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 813618 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 813618 ']' 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.855 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.855 [2024-12-16 16:13:29.297006] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
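The two pings above are nvmftestinit confirming the split-namespace topology the trace just built: the target port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic between them must cross the physical link. A hedged sketch of the same wiring with placeholder interface names (eth_tgt/eth_ini and tgt_ns are illustrative, not the script's variables):

    ip netns add tgt_ns                         # namespace that will host nvmf_tgt
    ip link set eth_tgt netns tgt_ns            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev eth_ini         # initiator side stays in the root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec tgt_ns ip link set eth_tgt up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2                          # root ns -> target ns
    ip netns exec tgt_ns ping -c 1 10.0.0.1     # target ns -> root ns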
00:07:40.855 [2024-12-16 16:13:29.297051] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:40.855 [2024-12-16 16:13:29.375184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:40.855 [2024-12-16 16:13:29.398206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. [2024-12-16 16:13:29.398238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. [2024-12-16 16:13:29.398245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:40.855 [2024-12-16 16:13:29.398250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. [2024-12-16 16:13:29.398255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:40.855 [2024-12-16 16:13:29.399481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:40.855 [2024-12-16 16:13:29.399589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 [2024-12-16 16:13:29.399591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:41.114 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:41.114 [2024-12-16 16:13:29.692204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:41.373 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:07:41.373 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:07:41.373 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:07:41.633 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:07:41.633 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:07:41.891 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:07:42.151 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8547ba20-7321-441a-953d-7a3ee262de9e
00:07:42.151 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8547ba20-7321-441a-953d-7a3ee262de9e lvol 20
00:07:42.410 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=eb8ae3b2-d35b-499e-af11-fccbe3da4ddc
00:07:42.410 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:42.410 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb8ae3b2-d35b-499e-af11-fccbe3da4ddc
00:07:42.669 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:42.929 [2024-12-16 16:13:31.359566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:42.929 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:43.188 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=813931
00:07:43.188 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:07:43.188 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:07:44.122 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot eb8ae3b2-d35b-499e-af11-fccbe3da4ddc MY_SNAPSHOT
00:07:44.381 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=34124509-a9a7-4bdf-b3fa-e6e8e242b5cd
00:07:44.381 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize eb8ae3b2-d35b-499e-af11-fccbe3da4ddc 30
00:07:44.639 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 34124509-a9a7-4bdf-b3fa-e6e8e242b5cd MY_CLONE
00:07:44.898 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9226f4b5-51fc-4bf1-9d66-ae3cc9dadf27
00:07:44.898 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9226f4b5-51fc-4bf1-9d66-ae3cc9dadf27
00:07:45.465 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 813931
00:07:53.578 Initializing NVMe Controllers
00:07:53.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:53.578 Controller IO queue size 128, less than required.
00:07:53.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
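Condensed, the lvol exercise traced above mutates the exported volume while spdk_nvme_perf keeps 128 random 4 KiB writes in flight against it (-q 128 -o 4096 -w randwrite -t 10, pinned to cores 3 and 4 via -c 0x18): snapshot the live lvol, grow it, clone the snapshot, then inflate the clone so it owns its own blocks. The same RPC sequence with the UUIDs replaced by shell captures (rpc.py path shortened; a sketch of the flow, not a drop-in script):

    rpc=scripts/rpc.py
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore on the raid0 bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # initial size 20 (the script's LVOL_BDEV_INIT_SIZE)
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current blocks under load
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol to its final size
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # writable clone backed by the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # copy shared blocks; clone stands alone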
00:07:53.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:53.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:53.578 Initialization complete. Launching workers.
00:07:53.578 ========================================================
00:07:53.578 Latency(us)
00:07:53.578 Device Information : IOPS MiB/s Average min max
00:07:53.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12283.07 47.98 10421.67 1324.74 60226.97
00:07:53.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12256.77 47.88 10441.46 3689.76 62867.36
00:07:53.578 ========================================================
00:07:53.578 Total : 24539.84 95.86 10431.56 1324.74 62867.36
00:07:53.578
00:07:53.578 16:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:53.837 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb8ae3b2-d35b-499e-af11-fccbe3da4ddc
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8547ba20-7321-441a-953d-7a3ee262de9e
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:54.096 rmmod nvme_tcp
00:07:54.096 rmmod nvme_fabrics
00:07:54.096 rmmod nvme_keyring
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 813618 ']'
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 813618
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 813618 ']'
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 813618
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 813618
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 813618' killing process with pid 813618
00:07:54.096 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 813618
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 813618
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:54.356 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:56.893
00:07:56.893 real 0m21.862s
00:07:56.893 user 1m2.797s
00:07:56.893 sys 0m7.630s
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:56.893 ************************************
00:07:56.893 END TEST nvmf_lvol
00:07:56.893 ************************************
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:56.893 ************************************
00:07:56.893 START TEST nvmf_lvs_grow
00:07:56.893 ************************************
00:07:56.893 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:56.893 * Looking for test storage...
00:07:56.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.893 --rc genhtml_branch_coverage=1 00:07:56.893 --rc genhtml_function_coverage=1 00:07:56.893 --rc genhtml_legend=1 00:07:56.893 --rc geninfo_all_blocks=1 00:07:56.893 --rc geninfo_unexecuted_blocks=1 00:07:56.893 00:07:56.893 ' 00:07:56.893 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.893 --rc genhtml_branch_coverage=1 00:07:56.893 --rc genhtml_function_coverage=1 00:07:56.894 --rc genhtml_legend=1 00:07:56.894 --rc geninfo_all_blocks=1 00:07:56.894 --rc geninfo_unexecuted_blocks=1 00:07:56.894 00:07:56.894 ' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.894 --rc genhtml_branch_coverage=1 00:07:56.894 --rc genhtml_function_coverage=1 00:07:56.894 --rc genhtml_legend=1 00:07:56.894 --rc geninfo_all_blocks=1 00:07:56.894 --rc geninfo_unexecuted_blocks=1 00:07:56.894 00:07:56.894 ' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.894 --rc genhtml_branch_coverage=1 00:07:56.894 --rc genhtml_function_coverage=1 00:07:56.894 --rc genhtml_legend=1 00:07:56.894 --rc geninfo_all_blocks=1 00:07:56.894 --rc geninfo_unexecuted_blocks=1 00:07:56.894 00:07:56.894 ' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:56.894 16:13:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.894 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.464 16:13:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.464 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.464 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.464 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.465 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:08:03.465 00:08:03.465 --- 10.0.0.2 ping statistics --- 00:08:03.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.465 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:03.465 00:08:03.465 --- 10.0.0.1 ping statistics --- 00:08:03.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.465 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=819412 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 819412 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 819412 ']' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.465 [2024-12-16 16:13:51.242629] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:03.465 [2024-12-16 16:13:51.242672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.465 [2024-12-16 16:13:51.321289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.465 [2024-12-16 16:13:51.342810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.465 [2024-12-16 16:13:51.342845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.465 [2024-12-16 16:13:51.342852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.465 [2024-12-16 16:13:51.342858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.465 [2024-12-16 16:13:51.342863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.465 [2024-12-16 16:13:51.343391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.465 [2024-12-16 16:13:51.635291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.465 ************************************ 00:08:03.465 START TEST lvs_grow_clean 00:08:03.465 ************************************ 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.465 16:13:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.465 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:03.724 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b941199e-dfcc-45dd-8647-e66a18e24893 00:08:03.724 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:03.724 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.724 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.724 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.724 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b941199e-dfcc-45dd-8647-e66a18e24893 lvol 150 00:08:03.982 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=140efa46-3871-4bcc-8816-59e941a8cc5d 00:08:03.982 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.982 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:04.240 [2024-12-16 16:13:52.648952] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:04.240 [2024-12-16 16:13:52.648996] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:04.240 true 00:08:04.240 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b941199e-dfcc-45dd-8647-e66a18e24893 00:08:04.240 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:04.499 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:04.499 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.499 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 140efa46-3871-4bcc-8816-59e941a8cc5d 00:08:04.758 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.017 [2024-12-16 16:13:53.383198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=819736 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 819736 /var/tmp/bdevperf.sock 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 819736 ']' 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.017 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.017 [2024-12-16 16:13:53.614451] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:05.017 [2024-12-16 16:13:53.614494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819736 ] 00:08:05.276 [2024-12-16 16:13:53.689254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.276 [2024-12-16 16:13:53.711679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.276 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.276 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:05.276 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.535 Nvme0n1 00:08:05.535 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:05.793 [ 00:08:05.793 { 00:08:05.793 "name": "Nvme0n1", 00:08:05.793 "aliases": [ 00:08:05.793 "140efa46-3871-4bcc-8816-59e941a8cc5d" 00:08:05.793 ], 00:08:05.793 "product_name": "NVMe disk", 00:08:05.793 "block_size": 4096, 00:08:05.793 "num_blocks": 38912, 00:08:05.793 "uuid": "140efa46-3871-4bcc-8816-59e941a8cc5d", 00:08:05.793 "numa_id": 1, 00:08:05.793 "assigned_rate_limits": { 00:08:05.793 "rw_ios_per_sec": 0, 00:08:05.793 "rw_mbytes_per_sec": 0, 00:08:05.793 "r_mbytes_per_sec": 0, 00:08:05.793 "w_mbytes_per_sec": 0 00:08:05.793 }, 00:08:05.793 "claimed": false, 00:08:05.793 "zoned": false, 00:08:05.793 "supported_io_types": { 00:08:05.793 "read": true, 00:08:05.793 "write": true, 00:08:05.793 "unmap": true, 00:08:05.793 "flush": true, 00:08:05.793 "reset": true, 00:08:05.793 "nvme_admin": true, 00:08:05.793 "nvme_io": true, 00:08:05.793 "nvme_io_md": false, 00:08:05.793 "write_zeroes": true, 00:08:05.793 "zcopy": false, 00:08:05.793 "get_zone_info": false, 00:08:05.793 "zone_management": false, 00:08:05.793 "zone_append": false, 00:08:05.793 "compare": true, 00:08:05.793 "compare_and_write": true, 00:08:05.793 "abort": true, 00:08:05.793 "seek_hole": false, 00:08:05.793 "seek_data": false, 00:08:05.793 "copy": true, 00:08:05.793 "nvme_iov_md": false 00:08:05.793 }, 00:08:05.793 "memory_domains": [ 00:08:05.793 { 00:08:05.793 "dma_device_id": "system", 00:08:05.793 "dma_device_type": 1 00:08:05.793 } 00:08:05.793 ], 00:08:05.793 "driver_specific": { 00:08:05.793 "nvme": [ 00:08:05.793 { 00:08:05.793 "trid": { 00:08:05.793 "trtype": "TCP", 00:08:05.793 "adrfam": "IPv4", 00:08:05.793 "traddr": "10.0.0.2", 00:08:05.793 "trsvcid": "4420", 00:08:05.793 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:05.793 }, 00:08:05.793 "ctrlr_data": { 00:08:05.794 "cntlid": 1, 00:08:05.794 "vendor_id": "0x8086", 00:08:05.794 "model_number": "SPDK bdev Controller", 00:08:05.794 "serial_number": "SPDK0", 00:08:05.794 "firmware_revision": "25.01", 00:08:05.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.794 "oacs": { 00:08:05.794 "security": 0, 00:08:05.794 "format": 0, 00:08:05.794 "firmware": 0, 00:08:05.794 "ns_manage": 0 00:08:05.794 }, 00:08:05.794 "multi_ctrlr": true, 00:08:05.794 
"ana_reporting": false 00:08:05.794 }, 00:08:05.794 "vs": { 00:08:05.794 "nvme_version": "1.3" 00:08:05.794 }, 00:08:05.794 "ns_data": { 00:08:05.794 "id": 1, 00:08:05.794 "can_share": true 00:08:05.794 } 00:08:05.794 } 00:08:05.794 ], 00:08:05.794 "mp_policy": "active_passive" 00:08:05.794 } 00:08:05.794 } 00:08:05.794 ] 00:08:05.794 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=819911 00:08:05.794 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:05.794 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.794 Running I/O for 10 seconds... 00:08:07.170 Latency(us) 00:08:07.170 [2024-12-16T15:13:55.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.170 Nvme0n1 : 1.00 23445.00 91.58 0.00 0.00 0.00 0.00 0.00 00:08:07.170 [2024-12-16T15:13:55.779Z] =================================================================================================================== 00:08:07.170 [2024-12-16T15:13:55.779Z] Total : 23445.00 91.58 0.00 0.00 0.00 0.00 0.00 00:08:07.170 00:08:07.737 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:07.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.996 Nvme0n1 : 2.00 23566.50 92.06 0.00 0.00 0.00 0.00 0.00 00:08:07.996 [2024-12-16T15:13:56.605Z] =================================================================================================================== 00:08:07.996 [2024-12-16T15:13:56.605Z] Total : 23566.50 92.06 0.00 0.00 0.00 0.00 0.00 00:08:07.996 00:08:07.996 true 00:08:07.996 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:07.996 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:08.255 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:08.255 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:08.255 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 819911 00:08:08.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.822 Nvme0n1 : 3.00 23587.00 92.14 0.00 0.00 0.00 0.00 0.00 00:08:08.822 [2024-12-16T15:13:57.431Z] =================================================================================================================== 00:08:08.822 [2024-12-16T15:13:57.431Z] Total : 23587.00 92.14 0.00 0.00 0.00 0.00 0.00 00:08:08.822 00:08:10.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.198 Nvme0n1 : 4.00 23645.00 92.36 0.00 0.00 0.00 0.00 0.00 00:08:10.198 [2024-12-16T15:13:58.807Z] 
=================================================================================================================== 00:08:10.198 [2024-12-16T15:13:58.807Z] Total : 23645.00 92.36 0.00 0.00 0.00 0.00 0.00 00:08:10.198 00:08:10.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.765 Nvme0n1 : 5.00 23687.00 92.53 0.00 0.00 0.00 0.00 0.00 00:08:10.765 [2024-12-16T15:13:59.374Z] =================================================================================================================== 00:08:10.765 [2024-12-16T15:13:59.374Z] Total : 23687.00 92.53 0.00 0.00 0.00 0.00 0.00 00:08:10.765 00:08:12.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.142 Nvme0n1 : 6.00 23672.17 92.47 0.00 0.00 0.00 0.00 0.00 00:08:12.142 [2024-12-16T15:14:00.751Z] =================================================================================================================== 00:08:12.142 [2024-12-16T15:14:00.751Z] Total : 23672.17 92.47 0.00 0.00 0.00 0.00 0.00 00:08:12.142 00:08:13.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.078 Nvme0n1 : 7.00 23671.43 92.47 0.00 0.00 0.00 0.00 0.00 00:08:13.078 [2024-12-16T15:14:01.687Z] =================================================================================================================== 00:08:13.078 [2024-12-16T15:14:01.687Z] Total : 23671.43 92.47 0.00 0.00 0.00 0.00 0.00 00:08:13.078 00:08:14.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.015 Nvme0n1 : 8.00 23705.25 92.60 0.00 0.00 0.00 0.00 0.00 00:08:14.015 [2024-12-16T15:14:02.624Z] =================================================================================================================== 00:08:14.015 [2024-12-16T15:14:02.624Z] Total : 23705.25 92.60 0.00 0.00 0.00 0.00 0.00 00:08:14.015 00:08:14.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.951 Nvme0n1 : 9.00 23727.44 92.69 0.00 0.00 0.00 0.00 0.00 00:08:14.951 [2024-12-16T15:14:03.560Z] =================================================================================================================== 00:08:14.951 [2024-12-16T15:14:03.560Z] Total : 23727.44 92.69 0.00 0.00 0.00 0.00 0.00 00:08:14.951 00:08:15.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.887 Nvme0n1 : 10.00 23749.20 92.77 0.00 0.00 0.00 0.00 0.00 00:08:15.887 [2024-12-16T15:14:04.496Z] =================================================================================================================== 00:08:15.887 [2024-12-16T15:14:04.496Z] Total : 23749.20 92.77 0.00 0.00 0.00 0.00 0.00 00:08:15.887 00:08:15.887 00:08:15.887 Latency(us) 00:08:15.887 [2024-12-16T15:14:04.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.888 Nvme0n1 : 10.00 23754.77 92.79 0.00 0.00 5385.45 1825.65 11297.16 00:08:15.888 [2024-12-16T15:14:04.497Z] =================================================================================================================== 00:08:15.888 [2024-12-16T15:14:04.497Z] Total : 23754.77 92.79 0.00 0.00 5385.45 1825.65 11297.16 00:08:15.888 { 00:08:15.888 "results": [ 00:08:15.888 { 00:08:15.888 "job": "Nvme0n1", 00:08:15.888 "core_mask": "0x2", 00:08:15.888 "workload": "randwrite", 00:08:15.888 "status": "finished", 00:08:15.888 "queue_depth": 128, 00:08:15.888 "io_size": 4096, 00:08:15.888 
"runtime": 10.003042, 00:08:15.888 "iops": 23754.773797810707, 00:08:15.888 "mibps": 92.79208514769807, 00:08:15.888 "io_failed": 0, 00:08:15.888 "io_timeout": 0, 00:08:15.888 "avg_latency_us": 5385.44925354207, 00:08:15.888 "min_latency_us": 1825.6457142857143, 00:08:15.888 "max_latency_us": 11297.158095238095 00:08:15.888 } 00:08:15.888 ], 00:08:15.888 "core_count": 1 00:08:15.888 } 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 819736 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 819736 ']' 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 819736 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 819736 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 819736' 00:08:15.888 killing process with pid 819736 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 819736 00:08:15.888 Received shutdown signal, test time was about 10.000000 seconds 00:08:15.888 00:08:15.888 Latency(us) 00:08:15.888 [2024-12-16T15:14:04.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.888 [2024-12-16T15:14:04.497Z] =================================================================================================================== 00:08:15.888 [2024-12-16T15:14:04.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:15.888 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 819736 00:08:16.147 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.405 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.405 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:16.405 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:16.676 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:16.676 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:16.676 16:14:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:16.942 [2024-12-16 16:14:05.338691] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:16.942 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:17.201 request: 00:08:17.201 { 00:08:17.201 "uuid": "b941199e-dfcc-45dd-8647-e66a18e24893", 00:08:17.201 "method": "bdev_lvol_get_lvstores", 00:08:17.201 "req_id": 1 00:08:17.201 } 00:08:17.201 Got JSON-RPC error response 00:08:17.201 response: 00:08:17.201 { 00:08:17.201 "code": -19, 00:08:17.201 "message": "No such device" 00:08:17.201 } 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.201 aio_bdev 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 140efa46-3871-4bcc-8816-59e941a8cc5d 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=140efa46-3871-4bcc-8816-59e941a8cc5d 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.201 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.459 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 140efa46-3871-4bcc-8816-59e941a8cc5d -t 2000 00:08:17.717 [ 00:08:17.717 { 00:08:17.717 "name": "140efa46-3871-4bcc-8816-59e941a8cc5d", 00:08:17.717 "aliases": [ 00:08:17.717 "lvs/lvol" 00:08:17.717 ], 00:08:17.717 "product_name": "Logical Volume", 00:08:17.717 "block_size": 4096, 00:08:17.717 "num_blocks": 38912, 00:08:17.717 "uuid": "140efa46-3871-4bcc-8816-59e941a8cc5d", 00:08:17.717 "assigned_rate_limits": { 00:08:17.717 "rw_ios_per_sec": 0, 00:08:17.717 "rw_mbytes_per_sec": 0, 00:08:17.717 "r_mbytes_per_sec": 0, 00:08:17.717 "w_mbytes_per_sec": 0 00:08:17.717 }, 00:08:17.717 "claimed": false, 00:08:17.717 "zoned": false, 00:08:17.717 "supported_io_types": { 00:08:17.717 "read": true, 00:08:17.717 "write": true, 00:08:17.717 "unmap": true, 00:08:17.717 "flush": false, 00:08:17.717 "reset": true, 00:08:17.717 "nvme_admin": false, 00:08:17.717 "nvme_io": false, 00:08:17.717 "nvme_io_md": false, 00:08:17.717 "write_zeroes": true, 00:08:17.717 "zcopy": false, 00:08:17.717 "get_zone_info": false, 00:08:17.717 "zone_management": false, 00:08:17.717 "zone_append": false, 00:08:17.717 "compare": false, 00:08:17.717 "compare_and_write": false, 00:08:17.717 "abort": false, 00:08:17.717 "seek_hole": true, 00:08:17.717 "seek_data": true, 00:08:17.717 "copy": false, 00:08:17.717 "nvme_iov_md": false 00:08:17.717 }, 00:08:17.717 "driver_specific": { 00:08:17.717 "lvol": { 00:08:17.717 "lvol_store_uuid": "b941199e-dfcc-45dd-8647-e66a18e24893", 00:08:17.717 "base_bdev": "aio_bdev", 00:08:17.717 "thin_provision": false, 00:08:17.717 "num_allocated_clusters": 38, 00:08:17.717 "snapshot": false, 00:08:17.717 "clone": false, 00:08:17.717 "esnap_clone": false 00:08:17.717 } 00:08:17.717 } 00:08:17.717 } 00:08:17.717 ] 00:08:17.717 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:17.717 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:17.717 
16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:17.717 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:17.717 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:17.717 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:17.975 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:17.975 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 140efa46-3871-4bcc-8816-59e941a8cc5d 00:08:18.233 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b941199e-dfcc-45dd-8647-e66a18e24893 00:08:18.492 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.492 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.751 00:08:18.751 real 0m15.409s 00:08:18.751 user 0m14.967s 00:08:18.751 sys 0m1.456s 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.751 ************************************ 00:08:18.751 END TEST lvs_grow_clean 00:08:18.751 ************************************ 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.751 ************************************ 00:08:18.751 START TEST lvs_grow_dirty 00:08:18.751 ************************************ 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.751 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.010 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:19.010 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.010 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:19.010 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:19.010 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.268 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.268 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.268 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6567d68-b6de-4b1f-a986-edafb182e35c lvol 150 00:08:19.527 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:19.527 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.527 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:19.527 [2024-12-16 16:14:08.129900] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:19.527 [2024-12-16 16:14:08.129946] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:19.527 true 00:08:19.785 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:19.785 16:14:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:19.785 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:19.785 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.044 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:20.302 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:20.302 [2024-12-16 16:14:08.880142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.302 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=822434 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 822434 /var/tmp/bdevperf.sock 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 822434 ']' 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.561 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.561 [2024-12-16 16:14:09.118604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:20.561 [2024-12-16 16:14:09.118655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822434 ] 00:08:20.820 [2024-12-16 16:14:09.190296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.820 [2024-12-16 16:14:09.212717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.820 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.820 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:20.820 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:21.388 Nvme0n1 00:08:21.388 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:21.388 [ 00:08:21.388 { 00:08:21.388 "name": "Nvme0n1", 00:08:21.388 "aliases": [ 00:08:21.388 "1e326b56-d6cc-42f7-bbe9-f8bced20f4fe" 00:08:21.388 ], 00:08:21.388 "product_name": "NVMe disk", 00:08:21.388 "block_size": 4096, 00:08:21.388 "num_blocks": 38912, 00:08:21.388 "uuid": "1e326b56-d6cc-42f7-bbe9-f8bced20f4fe", 00:08:21.388 "numa_id": 1, 00:08:21.388 "assigned_rate_limits": { 00:08:21.388 "rw_ios_per_sec": 0, 00:08:21.388 "rw_mbytes_per_sec": 0, 00:08:21.388 "r_mbytes_per_sec": 0, 00:08:21.388 "w_mbytes_per_sec": 0 00:08:21.388 }, 00:08:21.388 "claimed": false, 00:08:21.388 "zoned": false, 00:08:21.388 "supported_io_types": { 00:08:21.388 "read": true, 00:08:21.388 "write": true, 00:08:21.388 "unmap": true, 00:08:21.388 "flush": true, 00:08:21.388 "reset": true, 00:08:21.388 "nvme_admin": true, 00:08:21.388 "nvme_io": true, 00:08:21.388 "nvme_io_md": false, 00:08:21.388 "write_zeroes": true, 00:08:21.388 "zcopy": false, 00:08:21.388 "get_zone_info": false, 00:08:21.388 "zone_management": false, 00:08:21.388 "zone_append": false, 00:08:21.388 "compare": true, 00:08:21.388 "compare_and_write": true, 00:08:21.388 "abort": true, 00:08:21.388 "seek_hole": false, 00:08:21.388 "seek_data": false, 00:08:21.388 "copy": true, 00:08:21.388 "nvme_iov_md": false 00:08:21.388 }, 00:08:21.388 "memory_domains": [ 00:08:21.388 { 00:08:21.388 "dma_device_id": "system", 00:08:21.388 "dma_device_type": 1 00:08:21.388 } 00:08:21.388 ], 00:08:21.388 "driver_specific": { 00:08:21.388 "nvme": [ 00:08:21.388 { 00:08:21.388 "trid": { 00:08:21.388 "trtype": "TCP", 00:08:21.388 "adrfam": "IPv4", 00:08:21.388 "traddr": "10.0.0.2", 00:08:21.388 "trsvcid": "4420", 00:08:21.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:21.388 }, 00:08:21.388 "ctrlr_data": { 00:08:21.388 "cntlid": 1, 00:08:21.388 "vendor_id": "0x8086", 00:08:21.388 "model_number": "SPDK bdev Controller", 00:08:21.388 "serial_number": "SPDK0", 00:08:21.388 "firmware_revision": "25.01", 00:08:21.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:21.388 "oacs": { 00:08:21.388 "security": 0, 00:08:21.388 "format": 0, 00:08:21.388 "firmware": 0, 00:08:21.388 "ns_manage": 0 00:08:21.388 }, 00:08:21.388 "multi_ctrlr": true, 00:08:21.388 
"ana_reporting": false 00:08:21.388 }, 00:08:21.388 "vs": { 00:08:21.388 "nvme_version": "1.3" 00:08:21.388 }, 00:08:21.388 "ns_data": { 00:08:21.388 "id": 1, 00:08:21.388 "can_share": true 00:08:21.388 } 00:08:21.388 } 00:08:21.388 ], 00:08:21.388 "mp_policy": "active_passive" 00:08:21.388 } 00:08:21.388 } 00:08:21.388 ] 00:08:21.388 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=822531 00:08:21.388 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:21.388 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:21.647 Running I/O for 10 seconds... 00:08:22.583 Latency(us) 00:08:22.583 [2024-12-16T15:14:11.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.583 Nvme0n1 : 1.00 23379.00 91.32 0.00 0.00 0.00 0.00 0.00 00:08:22.583 [2024-12-16T15:14:11.192Z] =================================================================================================================== 00:08:22.583 [2024-12-16T15:14:11.192Z] Total : 23379.00 91.32 0.00 0.00 0.00 0.00 0.00 00:08:22.583 00:08:23.519 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:23.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.519 Nvme0n1 : 2.00 23540.50 91.96 0.00 0.00 0.00 0.00 0.00 00:08:23.519 [2024-12-16T15:14:12.128Z] =================================================================================================================== 00:08:23.519 [2024-12-16T15:14:12.128Z] Total : 23540.50 91.96 0.00 0.00 0.00 0.00 0.00 00:08:23.519 00:08:23.777 true 00:08:23.777 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:23.777 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:23.777 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:23.777 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:23.777 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 822531 00:08:24.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.714 Nvme0n1 : 3.00 23605.00 92.21 0.00 0.00 0.00 0.00 0.00 00:08:24.714 [2024-12-16T15:14:13.323Z] =================================================================================================================== 00:08:24.714 [2024-12-16T15:14:13.323Z] Total : 23605.00 92.21 0.00 0.00 0.00 0.00 0.00 00:08:24.714 00:08:25.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.650 Nvme0n1 : 4.00 23653.75 92.40 0.00 0.00 0.00 0.00 0.00 00:08:25.650 [2024-12-16T15:14:14.259Z] 
=================================================================================================================== 00:08:25.650 [2024-12-16T15:14:14.259Z] Total : 23653.75 92.40 0.00 0.00 0.00 0.00 0.00 00:08:25.650 00:08:26.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.592 Nvme0n1 : 5.00 23712.40 92.63 0.00 0.00 0.00 0.00 0.00 00:08:26.592 [2024-12-16T15:14:15.201Z] =================================================================================================================== 00:08:26.592 [2024-12-16T15:14:15.201Z] Total : 23712.40 92.63 0.00 0.00 0.00 0.00 0.00 00:08:26.592 00:08:27.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.528 Nvme0n1 : 6.00 23760.83 92.82 0.00 0.00 0.00 0.00 0.00 00:08:27.528 [2024-12-16T15:14:16.137Z] =================================================================================================================== 00:08:27.528 [2024-12-16T15:14:16.137Z] Total : 23760.83 92.82 0.00 0.00 0.00 0.00 0.00 00:08:27.528 00:08:28.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.463 Nvme0n1 : 7.00 23795.86 92.95 0.00 0.00 0.00 0.00 0.00 00:08:28.463 [2024-12-16T15:14:17.072Z] =================================================================================================================== 00:08:28.463 [2024-12-16T15:14:17.072Z] Total : 23795.86 92.95 0.00 0.00 0.00 0.00 0.00 00:08:28.463 00:08:29.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.840 Nvme0n1 : 8.00 23815.00 93.03 0.00 0.00 0.00 0.00 0.00 00:08:29.840 [2024-12-16T15:14:18.449Z] =================================================================================================================== 00:08:29.840 [2024-12-16T15:14:18.449Z] Total : 23815.00 93.03 0.00 0.00 0.00 0.00 0.00 00:08:29.840 00:08:30.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.775 Nvme0n1 : 9.00 23829.67 93.08 0.00 0.00 0.00 0.00 0.00 00:08:30.775 [2024-12-16T15:14:19.384Z] =================================================================================================================== 00:08:30.775 [2024-12-16T15:14:19.384Z] Total : 23829.67 93.08 0.00 0.00 0.00 0.00 0.00 00:08:30.775 00:08:31.711 00:08:31.711 Latency(us) 00:08:31.711 [2024-12-16T15:14:20.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.711 Nvme0n1 : 10.00 23838.37 93.12 0.00 0.00 5366.38 3105.16 10797.84 00:08:31.711 [2024-12-16T15:14:20.320Z] =================================================================================================================== 00:08:31.711 [2024-12-16T15:14:20.320Z] Total : 23838.37 93.12 0.00 0.00 5366.38 3105.16 10797.84 00:08:31.711 { 00:08:31.711 "results": [ 00:08:31.711 { 00:08:31.711 "job": "Nvme0n1", 00:08:31.711 "core_mask": "0x2", 00:08:31.711 "workload": "randwrite", 00:08:31.711 "status": "finished", 00:08:31.711 "queue_depth": 128, 00:08:31.711 "io_size": 4096, 00:08:31.711 "runtime": 10.001062, 00:08:31.711 "iops": 23838.368365279606, 00:08:31.711 "mibps": 93.11862642687346, 00:08:31.711 "io_failed": 0, 00:08:31.711 "io_timeout": 0, 00:08:31.711 "avg_latency_us": 5366.3848242226395, 00:08:31.711 "min_latency_us": 3105.158095238095, 00:08:31.711 "max_latency_us": 10797.83619047619 00:08:31.711 } 00:08:31.711 ], 00:08:31.711 "core_count": 1 00:08:31.711 } 00:08:31.711 16:14:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 822434 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 822434 ']' 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 822434 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 822434 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 822434' 00:08:31.711 killing process with pid 822434 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 822434 00:08:31.711 Received shutdown signal, test time was about 10.000000 seconds 00:08:31.711 00:08:31.711 Latency(us) 00:08:31.711 [2024-12-16T15:14:20.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.711 [2024-12-16T15:14:20.320Z] =================================================================================================================== 00:08:31.711 [2024-12-16T15:14:20.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 822434 00:08:31.711 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.970 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.228 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:32.228 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 819412 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 819412 00:08:32.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 819412 Killed "${NVMF_APP[@]}" "$@" 00:08:32.487 16:14:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=824423 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 824423 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 824423 ']' 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.487 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.487 [2024-12-16 16:14:21.010722] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:32.487 [2024-12-16 16:14:21.010766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.487 [2024-12-16 16:14:21.090098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.752 [2024-12-16 16:14:21.111549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.752 [2024-12-16 16:14:21.111581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.752 [2024-12-16 16:14:21.111588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.752 [2024-12-16 16:14:21.111594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.752 [2024-12-16 16:14:21.111599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.752 [2024-12-16 16:14:21.112129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.752 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.752 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:32.753 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.753 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.753 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.753 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.753 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.027 [2024-12-16 16:14:21.405101] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:33.027 [2024-12-16 16:14:21.405193] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:33.027 [2024-12-16 16:14:21.405217] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.027 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.028 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:33.028 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e326b56-d6cc-42f7-bbe9-f8bced20f4fe -t 2000 00:08:33.308 [ 00:08:33.308 { 00:08:33.308 "name": "1e326b56-d6cc-42f7-bbe9-f8bced20f4fe", 00:08:33.308 "aliases": [ 00:08:33.308 "lvs/lvol" 00:08:33.308 ], 00:08:33.308 "product_name": "Logical Volume", 00:08:33.308 "block_size": 4096, 00:08:33.308 "num_blocks": 38912, 00:08:33.308 "uuid": "1e326b56-d6cc-42f7-bbe9-f8bced20f4fe", 00:08:33.308 "assigned_rate_limits": { 00:08:33.308 "rw_ios_per_sec": 0, 00:08:33.308 "rw_mbytes_per_sec": 0, 00:08:33.308 "r_mbytes_per_sec": 0, 00:08:33.308 "w_mbytes_per_sec": 0 00:08:33.308 }, 00:08:33.308 "claimed": false, 00:08:33.308 "zoned": false, 
00:08:33.308 "supported_io_types": { 00:08:33.308 "read": true, 00:08:33.308 "write": true, 00:08:33.308 "unmap": true, 00:08:33.308 "flush": false, 00:08:33.308 "reset": true, 00:08:33.308 "nvme_admin": false, 00:08:33.308 "nvme_io": false, 00:08:33.308 "nvme_io_md": false, 00:08:33.308 "write_zeroes": true, 00:08:33.308 "zcopy": false, 00:08:33.308 "get_zone_info": false, 00:08:33.308 "zone_management": false, 00:08:33.308 "zone_append": false, 00:08:33.308 "compare": false, 00:08:33.308 "compare_and_write": false, 00:08:33.308 "abort": false, 00:08:33.308 "seek_hole": true, 00:08:33.308 "seek_data": true, 00:08:33.308 "copy": false, 00:08:33.308 "nvme_iov_md": false 00:08:33.308 }, 00:08:33.308 "driver_specific": { 00:08:33.308 "lvol": { 00:08:33.308 "lvol_store_uuid": "d6567d68-b6de-4b1f-a986-edafb182e35c", 00:08:33.308 "base_bdev": "aio_bdev", 00:08:33.308 "thin_provision": false, 00:08:33.308 "num_allocated_clusters": 38, 00:08:33.308 "snapshot": false, 00:08:33.308 "clone": false, 00:08:33.308 "esnap_clone": false 00:08:33.308 } 00:08:33.308 } 00:08:33.308 } 00:08:33.308 ] 00:08:33.308 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:33.308 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:33.308 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:33.608 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:33.608 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:33.608 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:33.924 [2024-12-16 16:14:22.370171] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:33.924 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:34.187 request: 00:08:34.187 { 00:08:34.187 "uuid": "d6567d68-b6de-4b1f-a986-edafb182e35c", 00:08:34.187 "method": "bdev_lvol_get_lvstores", 00:08:34.187 "req_id": 1 00:08:34.187 } 00:08:34.187 Got JSON-RPC error response 00:08:34.187 response: 00:08:34.187 { 00:08:34.187 "code": -19, 00:08:34.187 "message": "No such device" 00:08:34.187 } 00:08:34.187 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:34.187 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.187 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.187 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.187 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:34.445 aio_bdev 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.445 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:34.445 16:14:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e326b56-d6cc-42f7-bbe9-f8bced20f4fe -t 2000 00:08:34.703 [ 00:08:34.703 { 00:08:34.703 "name": "1e326b56-d6cc-42f7-bbe9-f8bced20f4fe", 00:08:34.703 "aliases": [ 00:08:34.703 "lvs/lvol" 00:08:34.703 ], 00:08:34.703 "product_name": "Logical Volume", 00:08:34.703 "block_size": 4096, 00:08:34.703 "num_blocks": 38912, 00:08:34.703 "uuid": "1e326b56-d6cc-42f7-bbe9-f8bced20f4fe", 00:08:34.703 "assigned_rate_limits": { 00:08:34.703 "rw_ios_per_sec": 0, 00:08:34.703 "rw_mbytes_per_sec": 0, 00:08:34.703 "r_mbytes_per_sec": 0, 00:08:34.703 "w_mbytes_per_sec": 0 00:08:34.703 }, 00:08:34.703 "claimed": false, 00:08:34.703 "zoned": false, 00:08:34.703 "supported_io_types": { 00:08:34.703 "read": true, 00:08:34.703 "write": true, 00:08:34.703 "unmap": true, 00:08:34.703 "flush": false, 00:08:34.703 "reset": true, 00:08:34.703 "nvme_admin": false, 00:08:34.703 "nvme_io": false, 00:08:34.703 "nvme_io_md": false, 00:08:34.703 "write_zeroes": true, 00:08:34.703 "zcopy": false, 00:08:34.703 "get_zone_info": false, 00:08:34.703 "zone_management": false, 00:08:34.703 "zone_append": false, 00:08:34.703 "compare": false, 00:08:34.703 "compare_and_write": false, 00:08:34.703 "abort": false, 00:08:34.703 "seek_hole": true, 00:08:34.703 "seek_data": true, 00:08:34.703 "copy": false, 00:08:34.703 "nvme_iov_md": false 00:08:34.703 }, 00:08:34.703 "driver_specific": { 00:08:34.703 "lvol": { 00:08:34.703 "lvol_store_uuid": "d6567d68-b6de-4b1f-a986-edafb182e35c", 00:08:34.703 "base_bdev": "aio_bdev", 00:08:34.703 "thin_provision": false, 00:08:34.703 "num_allocated_clusters": 38, 00:08:34.703 "snapshot": false, 00:08:34.703 "clone": false, 00:08:34.703 "esnap_clone": false 00:08:34.703 } 00:08:34.703 } 00:08:34.703 } 00:08:34.703 ] 00:08:34.703 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:34.703 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:34.703 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:34.962 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:34.962 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6567d68-b6de-4b1f-a986-edafb182e35c 00:08:34.962 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:34.962 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:34.962 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e326b56-d6cc-42f7-bbe9-f8bced20f4fe 00:08:35.221 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6567d68-b6de-4b1f-a986-edafb182e35c 
00:08:35.479 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:35.738 00:08:35.738 real 0m17.026s 00:08:35.738 user 0m43.634s 00:08:35.738 sys 0m3.848s 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.738 ************************************ 00:08:35.738 END TEST lvs_grow_dirty 00:08:35.738 ************************************ 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:35.738 nvmf_trace.0 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.738 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.738 rmmod nvme_tcp 00:08:35.738 rmmod nvme_fabrics 00:08:35.738 rmmod nvme_keyring 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 824423 ']' 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 824423 00:08:35.997 
16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 824423 ']' 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 824423 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 824423 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 824423' 00:08:35.997 killing process with pid 824423 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 824423 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 824423 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.997 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.534 00:08:38.534 real 0m41.647s 00:08:38.534 user 1m4.177s 00:08:38.534 sys 0m10.264s 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.534 ************************************ 00:08:38.534 END TEST nvmf_lvs_grow 00:08:38.534 ************************************ 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.534 ************************************ 00:08:38.534 START TEST nvmf_bdev_io_wait 00:08:38.534 ************************************ 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:38.534 * Looking for test storage... 00:08:38.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.534 --rc genhtml_function_coverage=1 00:08:38.534 --rc genhtml_legend=1 00:08:38.534 --rc geninfo_all_blocks=1 00:08:38.534 --rc geninfo_unexecuted_blocks=1 00:08:38.534 00:08:38.534 ' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.534 --rc genhtml_function_coverage=1 00:08:38.534 --rc genhtml_legend=1 00:08:38.534 --rc geninfo_all_blocks=1 00:08:38.534 --rc geninfo_unexecuted_blocks=1 00:08:38.534 00:08:38.534 ' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.534 --rc genhtml_function_coverage=1 00:08:38.534 --rc genhtml_legend=1 00:08:38.534 --rc geninfo_all_blocks=1 00:08:38.534 --rc geninfo_unexecuted_blocks=1 00:08:38.534 00:08:38.534 ' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.534 --rc genhtml_branch_coverage=1 00:08:38.534 --rc genhtml_function_coverage=1 00:08:38.534 --rc genhtml_legend=1 00:08:38.534 --rc geninfo_all_blocks=1 00:08:38.534 --rc geninfo_unexecuted_blocks=1 00:08:38.534 00:08:38.534 ' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.534 16:14:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.534 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.535 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:45.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:45.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.106 16:14:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:45.106 Found net devices under 0000:af:00.0: cvl_0_0 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:45.106 Found net devices under 0000:af:00.1: cvl_0_1 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.106 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:08:45.107 00:08:45.107 --- 10.0.0.2 ping statistics --- 00:08:45.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.107 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:45.107 00:08:45.107 --- 10.0.0.1 ping statistics --- 00:08:45.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.107 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=828458 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 828458 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 828458 ']' 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.107 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 [2024-12-16 16:14:32.913972] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
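For readers following the trace: the nvmf/common.sh@250-@291 records above are nvmf_tcp_init building a two-port loopback topology before the target comes up. A condensed sketch of those steps, assuming the same cvl_0_0/cvl_0_1 port pair this rig enumerated (this is not the harness source itself):

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init sequence traced above: the target-side
# port is isolated in a network namespace so the initiator (10.0.0.1) and the
# target (10.0.0.2) talk over the physical link even on a single host.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # port that moves into the target namespace
INI_IF=cvl_0_1   # port that stays in the default (initiator) namespace

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tags the rule so teardown can strip it.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Moving cvl_0_0 into its own namespace is what lets one host exercise the real wire: packets from 10.0.0.1 to 10.0.0.2 must leave one port and re-enter the other instead of short-circuiting through the kernel loopback path.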
00:08:45.107 [2024-12-16 16:14:32.914020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.107 [2024-12-16 16:14:32.993565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.107 [2024-12-16 16:14:33.017936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.107 [2024-12-16 16:14:33.017976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.107 [2024-12-16 16:14:33.017983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.107 [2024-12-16 16:14:33.017988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.107 [2024-12-16 16:14:33.017993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.107 [2024-12-16 16:14:33.019346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.107 [2024-12-16 16:14:33.019453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.107 [2024-12-16 16:14:33.019561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.107 [2024-12-16 16:14:33.019562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:45.107 [2024-12-16 16:14:33.166660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 Malloc0 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.107 [2024-12-16 16:14:33.209666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=828580 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=828582 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=828585 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=828587 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.107 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.108 { 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme$subsystem", 00:08:45.108 "trtype": "$TEST_TRANSPORT", 00:08:45.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "$NVMF_PORT", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.108 "hdgst": ${hdgst:-false}, 00:08:45.108 "ddgst": ${ddgst:-false} 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 } 00:08:45.108 EOF 00:08:45.108 )") 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.108 { 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme$subsystem", 00:08:45.108 "trtype": "$TEST_TRANSPORT", 00:08:45.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "$NVMF_PORT", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.108 "hdgst": ${hdgst:-false}, 00:08:45.108 "ddgst": ${ddgst:-false} 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 } 00:08:45.108 EOF 00:08:45.108 )") 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.108 { 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme$subsystem", 
00:08:45.108 "trtype": "$TEST_TRANSPORT", 00:08:45.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "$NVMF_PORT", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.108 "hdgst": ${hdgst:-false}, 00:08:45.108 "ddgst": ${ddgst:-false} 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 } 00:08:45.108 EOF 00:08:45.108 )") 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.108 { 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme$subsystem", 00:08:45.108 "trtype": "$TEST_TRANSPORT", 00:08:45.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "$NVMF_PORT", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.108 "hdgst": ${hdgst:-false}, 00:08:45.108 "ddgst": ${ddgst:-false} 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 } 00:08:45.108 EOF 00:08:45.108 )") 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 828580 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme1", 00:08:45.108 "trtype": "tcp", 00:08:45.108 "traddr": "10.0.0.2", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "4420", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.108 "hdgst": false, 00:08:45.108 "ddgst": false 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 }' 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme1", 00:08:45.108 "trtype": "tcp", 00:08:45.108 "traddr": "10.0.0.2", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "4420", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.108 "hdgst": false, 00:08:45.108 "ddgst": false 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 }' 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme1", 00:08:45.108 "trtype": "tcp", 00:08:45.108 "traddr": "10.0.0.2", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "4420", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.108 "hdgst": false, 00:08:45.108 "ddgst": false 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 }' 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.108 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.108 "params": { 00:08:45.108 "name": "Nvme1", 00:08:45.108 "trtype": "tcp", 00:08:45.108 "traddr": "10.0.0.2", 00:08:45.108 "adrfam": "ipv4", 00:08:45.108 "trsvcid": "4420", 00:08:45.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.108 "hdgst": false, 00:08:45.108 "ddgst": false 00:08:45.108 }, 00:08:45.108 "method": "bdev_nvme_attach_controller" 00:08:45.108 }' 00:08:45.108 [2024-12-16 16:14:33.261628] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:45.108 [2024-12-16 16:14:33.261630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:45.108 [2024-12-16 16:14:33.261670] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:45.108 [2024-12-16 16:14:33.261670] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:45.108 [2024-12-16 16:14:33.263830] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
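The EAL parameter records above and below were emitted by concurrent processes and arrived interleaved on the console; they have been de-tangled here (0x10 pairs with spdk1, 0x20 with spdk2, 0x40 with spdk3, 0x80 with spdk4, matching the -i instance ids on the traced command lines). They belong to the four bdevperf instances from bdev_io_wait.sh starting in parallel, one per workload, each with its own core mask and instance id so the DPDK shared-memory file prefixes stay distinct. Condensed launch pattern, with the binary path shortened as an assumption and gen_nvmf_target_json standing in for the traced helper:

```bash
# Sketch of the parallel launch in bdev_io_wait.sh; BDEVPERF path is an
# assumption, adjust to your build tree.
BDEVPERF=./build/examples/bdevperf
declare -A workloads=([write]="0x10 1" [read]="0x20 2" [flush]="0x40 3" [unmap]="0x80 4")
pids=()
for wl in write read flush unmap; do
    read -r mask id <<<"${workloads[$wl]}"
    "$BDEVPERF" -m "$mask" -i "$id" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$wl" -t 1 -s 256 &   # same knobs as the trace
    pids+=($!)
done
wait "${pids[@]}"   # mirrors the wait $WRITE_PID / $READ_PID / ... below
```

Distinct -i values matter because all four processes attach to the same target subsystem at 10.0.0.2:4420; only their local DPDK shared-memory state must not collide.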
00:08:45.108 [2024-12-16 16:14:33.263841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:45.108 [2024-12-16 16:14:33.263880] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:45.108 [2024-12-16 16:14:33.263883] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:45.108 [2024-12-16 16:14:33.464888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.108 [2024-12-16 16:14:33.486425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:45.108 [2024-12-16 16:14:33.517610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.108 [2024-12-16 16:14:33.532599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:45.108 [2024-12-16 16:14:33.617298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.108 [2024-12-16 16:14:33.640791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:45.108 [2024-12-16 16:14:33.680261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.108 [2024-12-16 16:14:33.696128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:45.366 Running I/O for 1 seconds... 00:08:45.366 Running I/O for 1 seconds... 00:08:45.366 Running I/O for 1 seconds... 00:08:45.623 Running I/O for 1 seconds... 00:08:46.557 243792.00 IOPS, 952.31 MiB/s 00:08:46.557 Latency(us) 00:08:46.557 [2024-12-16T15:14:35.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.558 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:46.558 Nvme1n1 : 1.00 243422.69 950.87 0.00 0.00 523.49 220.40 1497.97 00:08:46.558 [2024-12-16T15:14:35.167Z] =================================================================================================================== 00:08:46.558 [2024-12-16T15:14:35.167Z] Total : 243422.69 950.87 0.00 0.00 523.49 220.40 1497.97 00:08:46.558 11496.00 IOPS, 44.91 MiB/s [2024-12-16T15:14:35.167Z] 11184.00 IOPS, 43.69 MiB/s 00:08:46.558 Latency(us) 00:08:46.558 [2024-12-16T15:14:35.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.558 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:46.558 Nvme1n1 : 1.01 11245.59 43.93 0.00 0.00 11344.76 4993.22 20472.20 00:08:46.558 [2024-12-16T15:14:35.167Z] =================================================================================================================== 00:08:46.558 [2024-12-16T15:14:35.167Z] Total : 11245.59 43.93 0.00 0.00 11344.76 4993.22 20472.20 00:08:46.558 00:08:46.559 Latency(us) 00:08:46.559 [2024-12-16T15:14:35.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.559 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:46.559 Nvme1n1 : 1.01 11542.47 45.09 0.00 0.00 11047.91 6241.52 19972.88 00:08:46.559 [2024-12-16T15:14:35.168Z] =================================================================================================================== 00:08:46.559 [2024-12-16T15:14:35.168Z] Total : 11542.47 45.09 0.00 0.00 11047.91 6241.52 19972.88 00:08:46.559 10267.00 IOPS, 40.11
MiB/s 00:08:46.559 Latency(us) 00:08:46.559 [2024-12-16T15:14:35.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.559 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:46.559 Nvme1n1 : 1.01 10345.73 40.41 0.00 0.00 12335.75 4213.03 25715.08 00:08:46.559 [2024-12-16T15:14:35.169Z] =================================================================================================================== 00:08:46.560 [2024-12-16T15:14:35.169Z] Total : 10345.73 40.41 0.00 0.00 12335.75 4213.03 25715.08 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 828582 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 828585 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 828587 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.560 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.561 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.561 rmmod nvme_tcp 00:08:46.821 rmmod nvme_fabrics 00:08:46.821 rmmod nvme_keyring 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 828458 ']' 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 828458 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 828458 ']' 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 828458 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 828458 00:08:46.821 
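The kill -0 / uname / ps sequence just traced is the harness's killprocess guard: it refuses to signal anything whose command name no longer looks like the process it started. Roughly, as a sketch (not the harness source):

```bash
# Sketch of the traced killprocess logic.
killprocess_sketch() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already exited, nothing to do
    if [[ $(uname) == Linux ]]; then
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1      # never kill a sudo wrapper by pid
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                      # reap it so the port frees up
}

# Teardown also strips only the firewall rules this test added, by their tag:
iptables-save | grep -v SPDK_NVMF | iptables-restore
```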
16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 828458' 00:08:46.821 killing process with pid 828458 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 828458 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 828458 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.821 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.355 00:08:49.355 real 0m10.776s 00:08:49.355 user 0m16.411s 00:08:49.355 sys 0m6.202s 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 ************************************ 00:08:49.355 END TEST nvmf_bdev_io_wait 00:08:49.355 ************************************ 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 ************************************ 00:08:49.355 START TEST nvmf_queue_depth 00:08:49.355 ************************************ 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:49.355 * Looking for test storage... 
00:08:49.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:49.355 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.356 --rc genhtml_branch_coverage=1 00:08:49.356 --rc genhtml_function_coverage=1 00:08:49.356 --rc genhtml_legend=1 00:08:49.356 --rc geninfo_all_blocks=1 00:08:49.356 --rc geninfo_unexecuted_blocks=1 00:08:49.356 00:08:49.356 ' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.356 --rc genhtml_branch_coverage=1 00:08:49.356 --rc genhtml_function_coverage=1 00:08:49.356 --rc genhtml_legend=1 00:08:49.356 --rc geninfo_all_blocks=1 00:08:49.356 --rc geninfo_unexecuted_blocks=1 00:08:49.356 00:08:49.356 ' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.356 --rc genhtml_branch_coverage=1 00:08:49.356 --rc genhtml_function_coverage=1 00:08:49.356 --rc genhtml_legend=1 00:08:49.356 --rc geninfo_all_blocks=1 00:08:49.356 --rc geninfo_unexecuted_blocks=1 00:08:49.356 00:08:49.356 ' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.356 --rc genhtml_branch_coverage=1 00:08:49.356 --rc genhtml_function_coverage=1 00:08:49.356 --rc genhtml_legend=1 00:08:49.356 --rc geninfo_all_blocks=1 00:08:49.356 --rc geninfo_unexecuted_blocks=1 00:08:49.356 00:08:49.356 ' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.356 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.357 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:49.357 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:49.357 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.357 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:55.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:55.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:55.927 Found net devices under 0000:af:00.0: cvl_0_0 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.927 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:55.927 Found net devices under 0000:af:00.1: cvl_0_1 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:08:55.928 00:08:55.928 --- 10.0.0.2 ping statistics --- 00:08:55.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.928 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:55.928 00:08:55.928 --- 10.0.0.1 ping statistics --- 00:08:55.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.928 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=832431 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 832431 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 832431 ']' 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.928 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 [2024-12-16 16:14:43.867586] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
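The nvmf_tcp_init sequence traced above is what lets a single host with one dual-port E810 NIC act as both initiator and target: after the PCI scan matches the two 0x8086:0x159b ports, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and a ping in each direction proves the path before nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten polls for /var/tmp/spdk.sock. Condensed into plain commands, as a sketch only, with interface names, addresses and flags taken from the log above:

    ip netns add cvl_0_0_ns_spdk                                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The sub-millisecond ping times (0.368 ms and 0.206 ms) are consistent with traffic actually crossing the link between the two ports rather than being short-circuited through loopback.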
00:08:55.928 [2024-12-16 16:14:43.867629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.928 [2024-12-16 16:14:43.935498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.928 [2024-12-16 16:14:43.955881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.928 [2024-12-16 16:14:43.955917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.928 [2024-12-16 16:14:43.955923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.928 [2024-12-16 16:14:43.955929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.928 [2024-12-16 16:14:43.955934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.928 [2024-12-16 16:14:43.956442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 [2024-12-16 16:14:44.098177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 Malloc0 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.928 16:14:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.928 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.929 [2024-12-16 16:14:44.148289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=832529 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 832529 /var/tmp/bdevperf.sock 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 832529 ']' 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.929 [2024-12-16 16:14:44.195941] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
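With the target listening, queue_depth.sh provisions it entirely over JSON-RPC: a TCP transport with the suite's usual options (-o -u 8192), a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf is then started with -z so it idles until perform_tests arrives over its own RPC socket. Since rpc_cmd in these tests ultimately drives scripts/rpc.py, the equivalent sequence can be sketched as follows (assuming an SPDK checkout as the working directory; arguments as logged above):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10 s verify workload, 4 KiB I/O, queue depth 1024, against the attached NVMe0n1
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10

The headline numbers in the results that follow are self-consistent: by Little's law, 1024 requests kept in flight at ~12,471 IOPS implies an average latency of 1024/12471 s ≈ 82.1 ms, in close agreement with the reported avg_latency_us of 81833.93.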
00:08:55.929 [2024-12-16 16:14:44.195979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832529 ]
00:08:55.929 [2024-12-16 16:14:44.270445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.929 [2024-12-16 16:14:44.293257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.929 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:56.188 NVMe0n1
00:08:56.188 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.188 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:56.188 Running I/O for 10 seconds...
00:08:58.060 11941.00 IOPS, 46.64 MiB/s [2024-12-16T15:14:48.047Z] 12147.00 IOPS, 47.45 MiB/s [2024-12-16T15:14:48.983Z] 12262.00 IOPS, 47.90 MiB/s [2024-12-16T15:14:49.919Z] 12310.75 IOPS, 48.09 MiB/s [2024-12-16T15:14:50.855Z] 12310.00 IOPS, 48.09 MiB/s [2024-12-16T15:14:51.791Z] 12329.17 IOPS, 48.16 MiB/s [2024-12-16T15:14:52.729Z] 12377.57 IOPS, 48.35 MiB/s [2024-12-16T15:14:53.665Z] 12392.50 IOPS, 48.41 MiB/s [2024-12-16T15:14:55.043Z] 12404.00 IOPS, 48.45 MiB/s [2024-12-16T15:14:55.043Z] 12451.50 IOPS, 48.64 MiB/s
00:09:06.434 Latency(us)
00:09:06.434 [2024-12-16T15:14:55.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:06.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:06.434 Verification LBA range: start 0x0 length 0x4000
00:09:06.434 NVMe0n1 : 10.06 12471.44 48.72 0.00 0.00 81833.93 19099.06 54176.43
00:09:06.434 [2024-12-16T15:14:55.043Z] ===================================================================================================================
00:09:06.434 [2024-12-16T15:14:55.043Z] Total : 12471.44 48.72 0.00 0.00 81833.93 19099.06 54176.43
00:09:06.434 {
00:09:06.434 "results": [
00:09:06.434 {
00:09:06.434 "job": "NVMe0n1",
00:09:06.434 "core_mask": "0x1",
00:09:06.434 "workload": "verify",
00:09:06.434 "status": "finished",
00:09:06.434 "verify_range": {
00:09:06.434 "start": 0,
00:09:06.434 "length": 16384
00:09:06.434 },
00:09:06.434 "queue_depth": 1024,
00:09:06.434 "io_size": 4096,
00:09:06.434 "runtime": 10.064914,
00:09:06.434 "iops": 12471.442875716573,
00:09:06.434 "mibps": 48.716573733267865,
00:09:06.434 "io_failed": 0,
00:09:06.434 "io_timeout": 0,
00:09:06.434 "avg_latency_us": 81833.93216920763,
00:09:06.434 "min_latency_us": 19099.062857142857,
00:09:06.434 "max_latency_us": 54176.426666666666
00:09:06.434 }
00:09:06.434 ],
00:09:06.434 "core_count": 1
00:09:06.434 }
00:09:06.434 16:14:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 832529 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 832529 ']' 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 832529 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 832529 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 832529' 00:09:06.434 killing process with pid 832529 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 832529 00:09:06.434 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.434 00:09:06.434 Latency(us) 00:09:06.434 [2024-12-16T15:14:55.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.434 [2024-12-16T15:14:55.043Z] =================================================================================================================== 00:09:06.434 [2024-12-16T15:14:55.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 832529 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.434 rmmod nvme_tcp 00:09:06.434 rmmod nvme_fabrics 00:09:06.434 rmmod nvme_keyring 00:09:06.434 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 832431 ']' 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 832431 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 832431 ']' 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 832431 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.434 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 832431 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 832431' 00:09:06.696 killing process with pid 832431 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 832431 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 832431 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.696 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.697 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.697 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.234 00:09:09.234 real 0m19.741s 00:09:09.234 user 0m23.020s 00:09:09.234 sys 0m6.011s 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.234 ************************************ 00:09:09.234 END TEST nvmf_queue_depth 00:09:09.234 ************************************ 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.234 ************************************ 00:09:09.234 START TEST nvmf_target_multipath 00:09:09.234 ************************************ 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.234 * Looking for test storage... 00:09:09.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.234 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.235 --rc genhtml_branch_coverage=1 00:09:09.235 --rc genhtml_function_coverage=1 00:09:09.235 --rc genhtml_legend=1 00:09:09.235 --rc geninfo_all_blocks=1 00:09:09.235 --rc geninfo_unexecuted_blocks=1 00:09:09.235 00:09:09.235 ' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.235 --rc genhtml_branch_coverage=1 00:09:09.235 --rc genhtml_function_coverage=1 00:09:09.235 --rc genhtml_legend=1 00:09:09.235 --rc geninfo_all_blocks=1 00:09:09.235 --rc geninfo_unexecuted_blocks=1 00:09:09.235 00:09:09.235 ' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.235 --rc genhtml_branch_coverage=1 00:09:09.235 --rc genhtml_function_coverage=1 00:09:09.235 --rc genhtml_legend=1 00:09:09.235 --rc geninfo_all_blocks=1 00:09:09.235 --rc geninfo_unexecuted_blocks=1 00:09:09.235 00:09:09.235 ' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.235 --rc genhtml_branch_coverage=1 00:09:09.235 --rc genhtml_function_coverage=1 00:09:09.235 --rc genhtml_legend=1 00:09:09.235 --rc geninfo_all_blocks=1 00:09:09.235 --rc geninfo_unexecuted_blocks=1 00:09:09.235 00:09:09.235 ' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.235 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.236 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:15.976 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:15.976 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:15.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:15.977 Found net devices under 0000:af:00.0: cvl_0_0 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.977 16:15:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:15.977 Found net devices under 0000:af:00.1: cvl_0_1 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:09:15.977 00:09:15.977 --- 10.0.0.2 ping statistics --- 00:09:15.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.977 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:15.977 00:09:15.977 --- 10.0.0.1 ping statistics --- 00:09:15.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.977 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:15.977 only one NIC for nvmf test 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
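The multipath suite gets as far as rebuilding the exact same one-NIC topology, then bails out: its guard at multipath.sh@45 concludes only a single NIC's worth of interfaces is available, prints 'only one NIC for nvmf test', and calls nvmftestfini before exiting 0. The teardown runs under set +e so that unloading modules which are already gone, or still briefly referenced, cannot fail the job. Roughly, as a sketch mirroring the trace below (the final guarded test is a suggested hardening, not what nvmf/common.sh currently does):

    set +e                               # a failed unload must not abort teardown
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break # retries ride out lingering references
    done
    modprobe -v -r nvme-fabrics
    set -e
    # The "[: : integer expression expected" message seen earlier (nvmf/common.sh
    # line 33) comes from comparing an empty variable with -eq; a defaulted
    # expansion avoids it. SOME_FLAG is a placeholder name, not the variable the
    # real script tests:
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag enabled"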
00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.977 rmmod nvme_tcp 00:09:15.977 rmmod nvme_fabrics 00:09:15.977 rmmod nvme_keyring 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:15.977 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.978 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:17.386
00:09:17.386 real 0m8.300s
00:09:17.386 user 0m1.843s
00:09:17.386 sys 0m4.448s
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:17.386 ************************************
00:09:17.386 END TEST nvmf_target_multipath
00:09:17.386 ************************************
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:17.386 ************************************
00:09:17.386 START TEST nvmf_zcopy
00:09:17.386 ************************************
00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:17.386 * Looking for test storage...
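The END TEST / START TEST banners and the real/user/sys timing block above are emitted by the run_test wrapper from autotest_common.sh, which dispatches each test script. A hedged sketch of its shape, inferred from the banners, the argument-count check traced at @1105, and the time summary; the actual body in the repo may differ:

    run_test() {
        local test_name=$1
        shift    # remaining words are the test command, e.g. zcopy.sh --transport=tcp
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # running the test under time produces the real/user/sys block
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The '[' 3 -le 1 ']' trace above is the wrapper checking it was given a test name plus at least one command word before dispatching.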
00:09:17.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.386 --rc genhtml_branch_coverage=1 00:09:17.386 --rc genhtml_function_coverage=1 00:09:17.386 --rc genhtml_legend=1 00:09:17.386 --rc geninfo_all_blocks=1 00:09:17.386 --rc geninfo_unexecuted_blocks=1 00:09:17.386 00:09:17.386 ' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.386 --rc genhtml_branch_coverage=1 00:09:17.386 --rc genhtml_function_coverage=1 00:09:17.386 --rc genhtml_legend=1 00:09:17.386 --rc geninfo_all_blocks=1 00:09:17.386 --rc geninfo_unexecuted_blocks=1 00:09:17.386 00:09:17.386 ' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.386 --rc genhtml_branch_coverage=1 00:09:17.386 --rc genhtml_function_coverage=1 00:09:17.386 --rc genhtml_legend=1 00:09:17.386 --rc geninfo_all_blocks=1 00:09:17.386 --rc geninfo_unexecuted_blocks=1 00:09:17.386 00:09:17.386 ' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.386 --rc genhtml_branch_coverage=1 00:09:17.386 --rc genhtml_function_coverage=1 00:09:17.386 --rc genhtml_legend=1 00:09:17.386 --rc geninfo_all_blocks=1 00:09:17.386 --rc geninfo_unexecuted_blocks=1 00:09:17.386 00:09:17.386 ' 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.386 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.387 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.958 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:23.959 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:23.959 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:23.959 Found net devices under 0000:af:00.0: cvl_0_0 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:23.959 Found net devices under 0000:af:00.1: cvl_0_1 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:09:23.959 00:09:23.959 --- 10.0.0.2 ping statistics --- 00:09:23.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.959 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:09:23.959 00:09:23.959 --- 10.0.0.1 ping statistics --- 00:09:23.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.959 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=841904 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 841904 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 841904 ']' 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.959 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.959 [2024-12-16 16:15:11.873477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
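Under the hood, nvmfappstart launches nvmf_tgt inside the server-side namespace and then blocks in waitforlisten until the application's RPC socket answers. A minimal sketch of that launch-and-wait pattern; the command lines are taken from the trace above, but the polling loop body is an assumption:

    # Start the target in the namespace created earlier (traced at common.sh@508).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!    # 841904 in this run

    # waitforlisten: poll until the target answers on /var/tmp/spdk.sock,
    # giving up early if the process has died.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1
        sleep 0.1
    done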
00:09:23.960 [2024-12-16 16:15:11.873526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.960 [2024-12-16 16:15:11.952498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.960 [2024-12-16 16:15:11.974038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.960 [2024-12-16 16:15:11.974070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.960 [2024-12-16 16:15:11.974077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.960 [2024-12-16 16:15:11.974082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.960 [2024-12-16 16:15:11.974088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.960 [2024-12-16 16:15:11.974579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 [2024-12-16 16:15:12.105344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 [2024-12-16 16:15:12.129538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 malloc0 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.960 { 00:09:23.960 "params": { 00:09:23.960 "name": "Nvme$subsystem", 00:09:23.960 "trtype": "$TEST_TRANSPORT", 00:09:23.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.960 "adrfam": "ipv4", 00:09:23.960 "trsvcid": "$NVMF_PORT", 00:09:23.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.960 "hdgst": ${hdgst:-false}, 00:09:23.960 "ddgst": ${ddgst:-false} 00:09:23.960 }, 00:09:23.960 "method": "bdev_nvme_attach_controller" 00:09:23.960 } 00:09:23.960 EOF 00:09:23.960 )") 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
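With the target listening, zcopy.sh configures it through the RPC sequence traced at zcopy.sh@22 through @30. Collected in one place for readability (the commands are exactly as logged; rpc_cmd wraps scripts/rpc.py against the target's RPC socket, and the comments are explanatory):

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # -a allow any host, -m max namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0    # 32 MB RAM-backed bdev with 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # expose malloc0 as NSID 1

The repeated "Requested NSID 1 already in use" errors later in the log come from re-running that last command once NSID 1 is already occupied.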
00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:23.960 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:23.960 "params": {
00:09:23.960 "name": "Nvme1",
00:09:23.960 "trtype": "tcp",
00:09:23.960 "traddr": "10.0.0.2",
00:09:23.960 "adrfam": "ipv4",
00:09:23.960 "trsvcid": "4420",
00:09:23.960 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:23.960 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:23.960 "hdgst": false,
00:09:23.960 "ddgst": false
00:09:23.960 },
00:09:23.960 "method": "bdev_nvme_attach_controller"
00:09:23.960 }'
00:09:23.960 [2024-12-16 16:15:12.211969] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:23.960 [2024-12-16 16:15:12.212011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841930 ]
00:09:23.960 [2024-12-16 16:15:12.282689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.960 [2024-12-16 16:15:12.305097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.960 Running I/O for 10 seconds...
00:09:26.274 8676.00 IOPS, 67.78 MiB/s
[2024-12-16T15:15:15.819Z] 8773.00 IOPS, 68.54 MiB/s
[2024-12-16T15:15:16.756Z] 8802.33 IOPS, 68.77 MiB/s
[2024-12-16T15:15:17.692Z] 8815.75 IOPS, 68.87 MiB/s
[2024-12-16T15:15:18.629Z] 8827.00 IOPS, 68.96 MiB/s
[2024-12-16T15:15:19.565Z] 8827.17 IOPS, 68.96 MiB/s
[2024-12-16T15:15:20.955Z] 8837.43 IOPS, 69.04 MiB/s
[2024-12-16T15:15:21.892Z] 8826.88 IOPS, 68.96 MiB/s
[2024-12-16T15:15:22.829Z] 8829.89 IOPS, 68.98 MiB/s
[2024-12-16T15:15:22.829Z] 8829.50 IOPS, 68.98 MiB/s
00:09:34.220 Latency(us)
00:09:34.220 [2024-12-16T15:15:22.829Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average    min      max
00:09:34.220 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:34.220 Verification LBA range: start 0x0 length 0x1000
00:09:34.220 Nvme1n1            :      10.01  8832.80    69.01     0.00    0.00  14450.13   401.80  22843.98
00:09:34.220 [2024-12-16T15:15:22.829Z] ===================================================================================================================
00:09:34.220 [2024-12-16T15:15:22.829Z] Total              :             8832.80    69.01     0.00    0.00  14450.13   401.80  22843.98
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=843714
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:34.220 {
00:09:34.220 "params": {
00:09:34.220 "name": 
"Nvme$subsystem", 00:09:34.220 "trtype": "$TEST_TRANSPORT", 00:09:34.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.220 "adrfam": "ipv4", 00:09:34.220 "trsvcid": "$NVMF_PORT", 00:09:34.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.220 "hdgst": ${hdgst:-false}, 00:09:34.220 "ddgst": ${ddgst:-false} 00:09:34.220 }, 00:09:34.220 "method": "bdev_nvme_attach_controller" 00:09:34.220 } 00:09:34.220 EOF 00:09:34.220 )") 00:09:34.220 [2024-12-16 16:15:22.689674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.220 [2024-12-16 16:15:22.689706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:34.220 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.220 "params": { 00:09:34.220 "name": "Nvme1", 00:09:34.220 "trtype": "tcp", 00:09:34.220 "traddr": "10.0.0.2", 00:09:34.220 "adrfam": "ipv4", 00:09:34.220 "trsvcid": "4420", 00:09:34.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.220 "hdgst": false, 00:09:34.220 "ddgst": false 00:09:34.220 }, 00:09:34.220 "method": "bdev_nvme_attach_controller" 00:09:34.220 }' 00:09:34.221 [2024-12-16 16:15:22.701681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.701694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.713709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.713719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.725742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.725752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.730763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:34.221 [2024-12-16 16:15:22.730803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843714 ] 00:09:34.221 [2024-12-16 16:15:22.737776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.737786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.749804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.749813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.761839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.761849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.773868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.773877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.785900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.785909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.797932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.797941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.804774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.221 [2024-12-16 16:15:22.809964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.809975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.822014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.221 [2024-12-16 16:15:22.822028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.221 [2024-12-16 16:15:22.827157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.480 [2024-12-16 16:15:22.834031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.834042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.846075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.846099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.858105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.858121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.870130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.870145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.882162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:34.480 [2024-12-16 16:15:22.882175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.894192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.894205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.906320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.906333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.918360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.918378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.930383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.930397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.942422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.942439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.954464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.954478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:22.966483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:22.966495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:23.013385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:23.013403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 [2024-12-16 16:15:23.022637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.480 [2024-12-16 16:15:23.022648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.480 Running I/O for 5 seconds... 
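Both bdevperf runs in this test receive their controller configuration the same way: gen_nvmf_target_json prints a bdev_nvme_attach_controller config (the JSON traced above), and the script hands it to bdevperf through a process substitution, which is why the logged command lines name --json /dev/fd/62 and /dev/fd/63 rather than a file. The repeating error pairs around this point also appear deliberate: while the 5-second randrw run owns NSID 1, the test keeps retrying nvmf_subsystem_add_ns, and every attempt fails in nvmf_rpc_ns_paused. A hedged sketch of a shape consistent with the trace; the real zcopy.sh loop is not visible in this log, and perfpid is the bdevperf pid captured at zcopy.sh@39:

    # Feed the generated JSON to bdevperf via process substitution (/dev/fd/<n>).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!    # 843714 in this run

    while kill -0 "$perfpid" 2> /dev/null; do
        # Expected to fail while NSID 1 exists; each attempt produces the
        # "Requested NSID 1 already in use" / "Unable to add namespace" pair.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

In the bdevperf flags, -t is run time in seconds, -q queue depth, -w workload type, -M the read percentage of the mix, and -o the I/O size in bytes (8 KiB here).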
00:09:34.480 [2024-12-16 16:15:23.038980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.480 [2024-12-16 16:15:23.038999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the preceding pair of ERROR lines repeats continuously, with only the timestamps changing, from 2024-12-16 16:15:23.038 through 16:15:27.056 (elapsed 00:09:34.480 to 00:09:38.636); the periodic throughput samples interleaved with that run are kept below]
00:09:35.524 17033.00 IOPS, 133.07 MiB/s [2024-12-16T15:15:24.133Z]
00:09:36.561 17005.00 IOPS, 132.85 MiB/s [2024-12-16T15:15:25.170Z]
00:09:37.598 17025.33 IOPS, 133.01 MiB/s [2024-12-16T15:15:26.207Z]
00:09:38.636 17008.25 IOPS, 132.88 MiB/s [2024-12-16T15:15:27.245Z]
00:09:38.636 [2024-12-16 16:15:27.056369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:38.636 [2024-12-16 16:15:27.056387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.070149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.070167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.084212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.084230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.097877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.097895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.111290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.111308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.124912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.124930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.138921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.138939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.152501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.152519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.166392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.166410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.180188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.180206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.193644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.193662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.207181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.207199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.221197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.221215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.636 [2024-12-16 16:15:27.234936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.636 [2024-12-16 16:15:27.234954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.249334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.249352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.264486] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.264504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.278942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.278960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.292448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.292465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.306180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.306198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.320017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.320035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.333968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.333986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.347241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.347259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.360785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.360802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.374559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.374577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.387959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.387979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.401859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.401878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.415413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.415431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.429643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.429662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.443349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.443378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.457189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.457207] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.471294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.471314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.484874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.484892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.894 [2024-12-16 16:15:27.499165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.894 [2024-12-16 16:15:27.499183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.152 [2024-12-16 16:15:27.514572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.152 [2024-12-16 16:15:27.514591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.152 [2024-12-16 16:15:27.528615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.152 [2024-12-16 16:15:27.528633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.542031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.542049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.555877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.555894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.569593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.569611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.583710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.583729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.597775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.597793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.611130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.611149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.624891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.624910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.638605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.638623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.652305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.652324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.665720] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.665737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.679156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.679175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.693107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.693127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.707102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.707121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.720924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.720944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.734708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.734726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.153 [2024-12-16 16:15:27.748437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.153 [2024-12-16 16:15:27.748454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.762148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.762167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.775838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.775858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.789511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.789530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.803565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.803583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.817221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.817241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.831042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.831060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.844764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.844783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.858309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.858327] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.871949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.871968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.885552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.885572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.899141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.899160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.913016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.913035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.926953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.926972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.940585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.940604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.954588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.954607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.968396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.968415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.982136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.982155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:27.995867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:27.995886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.412 [2024-12-16 16:15:28.009685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.412 [2024-12-16 16:15:28.009704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.671 [2024-12-16 16:15:28.023107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.671 [2024-12-16 16:15:28.023131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.671 [2024-12-16 16:15:28.036832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.671 [2024-12-16 16:15:28.036851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.671 16997.00 IOPS, 132.79 MiB/s [2024-12-16T15:15:28.280Z] [2024-12-16 16:15:28.046990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.671 [2024-12-16 16:15:28.047007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.671 00:09:39.671 
Latency(us)
[2024-12-16T15:15:28.280Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
Nvme1n1            : 5.01        17002.50  132.83  0.00    0.00  7521.08  3479.65  14667.58
[2024-12-16T15:15:28.280Z] ===================================================================================================================
[2024-12-16T15:15:28.280Z] Total              :             17002.50  132.83  0.00    0.00  7521.08  3479.65  14667.58
00:09:39.671 [2024-12-16 16:15:28.059015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.671 [2024-12-16 16:15:28.059030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair continues at ~12 ms intervals through 16:15:28.203 ...]
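The flood of paired errors above is expected: while the I/O job runs, zcopy.sh keeps calling nvmf_subsystem_add_ns for NSID 1, which is already attached, so subsystem.c rejects each attempt and the RPC layer logs the matching failure. A minimal sketch that reproduces the same pair against a running target (subsystem and bdev names taken from this log; the socket path and loop count are illustrative):

  #!/usr/bin/env bash
  # Repro sketch: re-adding an occupied NSID fails by design.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  for _ in $(seq 1 50); do
      # NSID 1 already exists, so the target logs
      # "Requested NSID 1 already in use" and the RPC returns non-zero.
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_subsystem_add_ns -n 1 "$NQN" malloc0 || true
  done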
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (843714) - No such process
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 843714
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:39.672 delay0
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:15:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:39.931 [2024-12-16 16:15:28.348735] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:46.501 Initializing NVMe Controllers
00:09:46.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:46.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:46.501 Initialization complete. Launching workers.
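Steps @52-@54 above swap the malloc-backed namespace for a delay bdev so that the abort example has long-lived I/O to cancel. A rough stand-alone equivalent using rpc.py directly (bdev/subsystem names and latencies copied from the trace; the log's rpc_cmd wrapper does the same thing via the default /var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  # Detach NSID 1, wrap malloc0 in a delay bdev with all four latency knobs
  # (avg/p99 read, avg/p99 write) at 1000000 us = 1 s, and re-attach it.
  "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
  "$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" delay0
  # With 1 s of injected latency, queued commands survive long enough to abort.
  "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'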
00:09:46.501 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1119
00:09:46.501 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1389, failed to submit 50
00:09:46.501 success 1241, unsuccessful 148, failed 0
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 841904 ']'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 841904
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 841904 ']'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 841904
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 841904
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 841904'
killing process with pid 841904
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 841904
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 841904
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
16:15:34
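The abort run finishes with 1241 of 1389 submitted aborts succeeding, and nvmftestfini tears the stack down. Condensed, the cleanup traced above amounts to the following sketch (pid 841904 from this log; wait works only because the harness shell started the target itself):

  sync                                  # settle outstanding I/O first
  modprobe -v -r nvme-tcp               # rmmods nvme_tcp/nvme_fabrics/nvme_keyring, per the trace
  modprobe -v -r nvme-fabrics
  kill 841904 && wait 841904            # stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules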
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
16:15:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
16:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:48.406
00:09:48.406 real 0m31.132s
00:09:48.406 user 0m41.655s
00:09:48.406 sys 0m10.942s
16:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
16:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.406 ************************************
00:09:48.406 END TEST nvmf_zcopy
00:09:48.406 ************************************
16:15:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
16:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
16:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
16:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:48.406 ************************************
00:09:48.406 START TEST nvmf_nmic
00:09:48.406 ************************************
16:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:48.666 * Looking for test storage...
00:09:48.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.666 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.666 --rc genhtml_branch_coverage=1 00:09:48.666 --rc genhtml_function_coverage=1 00:09:48.666 --rc genhtml_legend=1 00:09:48.666 --rc geninfo_all_blocks=1 00:09:48.666 --rc geninfo_unexecuted_blocks=1 00:09:48.666 00:09:48.666 ' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.667 --rc genhtml_branch_coverage=1 00:09:48.667 --rc genhtml_function_coverage=1 00:09:48.667 --rc genhtml_legend=1 00:09:48.667 --rc geninfo_all_blocks=1 00:09:48.667 --rc geninfo_unexecuted_blocks=1 00:09:48.667 00:09:48.667 ' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.667 --rc genhtml_branch_coverage=1 00:09:48.667 --rc genhtml_function_coverage=1 00:09:48.667 --rc genhtml_legend=1 00:09:48.667 --rc geninfo_all_blocks=1 00:09:48.667 --rc geninfo_unexecuted_blocks=1 00:09:48.667 00:09:48.667 ' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.667 --rc genhtml_branch_coverage=1 00:09:48.667 --rc genhtml_function_coverage=1 00:09:48.667 --rc genhtml_legend=1 00:09:48.667 --rc geninfo_all_blocks=1 00:09:48.667 --rc geninfo_unexecuted_blocks=1 00:09:48.667 00:09:48.667 ' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
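The scripts/common.sh trace above (@333-@368) is autotest's dotted-version compare, deciding whether the installed lcov predates 2.x before enabling the branch/function coverage flags. A stand-alone rendering of the same split-and-compare idea (the helper name version_lt is mine, not autotest's; fields are assumed numeric):

  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"     # same IFS split the trace shows
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
      done
      return 1                           # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message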
00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:48.667 
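The "[: : integer expression expected" complaint above is a real, if harmless, wart: common.sh line 33 runs '[' '' -eq 1 ']' when the flag it checks is unset, and test(1) refuses to compare an empty string numerically. Expanding with a default keeps the comparison numeric; FLAG below is a stand-in for whichever variable common.sh actually tests there:

  # '[' '' -eq 1 ']' -> "integer expression expected"; default first instead:
  if [ "${FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi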
16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.667 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.241 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:55.242 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:55.242 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.242 16:15:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:55.242 Found net devices under 0000:af:00.0: cvl_0_0 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:55.242 Found net devices under 0000:af:00.1: cvl_0_1 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.242 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:09:55.242 00:09:55.242 --- 10.0.0.2 ping statistics --- 00:09:55.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.242 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:55.242 00:09:55.242 --- 10.0.0.1 ping statistics --- 00:09:55.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.242 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=849176 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 849176 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 849176 ']' 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.242 [2024-12-16 16:15:43.193017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
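[Editor's note] The block above is the harness's nvmf_tcp_init step: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables pinhole admits NVMe/TCP traffic on port 4420, and both directions are ping-verified before the target starts. A minimal standalone sketch of the same wiring, using the interface names and addresses taken from this log (run as root):

    # Recreate the loopback topology from nvmf_tcp_init (names as in this log).
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"        # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator IP, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP on the initiator interface; the comment tag lets the
    # harness strip exactly these rules again at teardown (see iptr later).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity-check both directions before launching the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The target application is then prefixed with the namespace (NVMF_APP becomes "ip netns exec cvl_0_0_ns_spdk nvmf_tgt ..."), which is why the listener created later on 10.0.0.2:4420 is only reachable through cvl_0_1.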
00:09:55.242 [2024-12-16 16:15:43.193066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.242 [2024-12-16 16:15:43.272495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.242 [2024-12-16 16:15:43.296139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.242 [2024-12-16 16:15:43.296179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.242 [2024-12-16 16:15:43.296186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.242 [2024-12-16 16:15:43.296192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.242 [2024-12-16 16:15:43.296197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.242 [2024-12-16 16:15:43.297612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.242 [2024-12-16 16:15:43.297721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.242 [2024-12-16 16:15:43.297805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.242 [2024-12-16 16:15:43.297806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.242 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 [2024-12-16 16:15:43.438216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 Malloc0 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 [2024-12-16 16:15:43.502228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:55.243 test case1: single bdev can't be used in multiple subsystems 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 [2024-12-16 16:15:43.526120] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:55.243 [2024-12-16 16:15:43.526139] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:55.243 [2024-12-16 16:15:43.526147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.243 request: 00:09:55.243 { 00:09:55.243 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:55.243 "namespace": { 00:09:55.243 "bdev_name": "Malloc0", 00:09:55.243 "no_auto_visible": false, 
00:09:55.243 "hide_metadata": false 00:09:55.243 }, 00:09:55.243 "method": "nvmf_subsystem_add_ns", 00:09:55.243 "req_id": 1 00:09:55.243 } 00:09:55.243 Got JSON-RPC error response 00:09:55.243 response: 00:09:55.243 { 00:09:55.243 "code": -32602, 00:09:55.243 "message": "Invalid parameters" 00:09:55.243 } 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:55.243 Adding namespace failed - expected result. 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:55.243 test case2: host connect to nvmf target in multiple paths 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.243 [2024-12-16 16:15:43.534243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.243 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:56.180 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:57.555 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.555 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:57.555 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.555 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:57.555 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:59.461 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:59.461 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:59.461 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.461 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:59.461 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.461 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:59.461 16:15:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:59.461 [global] 00:09:59.461 thread=1 00:09:59.461 invalidate=1 00:09:59.461 rw=write 00:09:59.461 time_based=1 00:09:59.461 runtime=1 00:09:59.461 ioengine=libaio 00:09:59.461 direct=1 00:09:59.461 bs=4096 00:09:59.461 iodepth=1 00:09:59.461 norandommap=0 00:09:59.461 numjobs=1 00:09:59.461 00:09:59.461 verify_dump=1 00:09:59.461 verify_backlog=512 00:09:59.461 verify_state_save=0 00:09:59.461 do_verify=1 00:09:59.461 verify=crc32c-intel 00:09:59.461 [job0] 00:09:59.461 filename=/dev/nvme0n1 00:09:59.461 Could not set queue depth (nvme0n1) 00:09:59.720 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.720 fio-3.35 00:09:59.720 Starting 1 thread 00:10:01.098 00:10:01.098 job0: (groupid=0, jobs=1): err= 0: pid=850041: Mon Dec 16 16:15:49 2024 00:10:01.098 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:01.098 slat (nsec): min=6416, max=26675, avg=7174.42, stdev=863.55 00:10:01.098 clat (usec): min=184, max=320, avg=216.54, stdev=10.82 00:10:01.098 lat (usec): min=191, max=328, avg=223.72, stdev=10.86 00:10:01.098 clat percentiles (usec): 00:10:01.098 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:10:01.098 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:10:01.098 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 227], 95.00th=[ 237], 00:10:01.098 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 273], 99.95th=[ 285], 00:10:01.098 | 99.99th=[ 322] 00:10:01.098 write: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:10:01.098 slat (nsec): min=9328, max=69613, avg=10300.72, stdev=1535.53 00:10:01.098 clat (usec): min=109, max=423, avg=141.73, stdev=27.33 00:10:01.098 lat (usec): min=122, max=493, avg=152.03, stdev=27.67 00:10:01.098 clat percentiles (usec): 00:10:01.098 | 1.00th=[ 117], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 123], 00:10:01.098 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:10:01.098 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 180], 00:10:01.098 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 277], 99.95th=[ 277], 00:10:01.098 | 99.99th=[ 424] 00:10:01.098 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:10:01.098 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:01.098 lat (usec) : 250=98.60%, 500=1.40% 00:10:01.098 cpu : usr=3.10%, sys=4.30%, ctx=5293, majf=0, minf=1 00:10:01.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.098 issued rwts: total=2560,2733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.098 00:10:01.098 Run status group 0 (all jobs): 00:10:01.098 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:01.098 WRITE: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:10:01.098 00:10:01.098 Disk stats (read/write): 00:10:01.098 nvme0n1: ios=2319/2560, merge=0/0, ticks=505/330, in_queue=835, util=91.18% 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.098 rmmod nvme_tcp 00:10:01.098 rmmod nvme_fabrics 00:10:01.098 rmmod nvme_keyring 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 849176 ']' 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 849176 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 849176 ']' 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 849176 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849176 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 849176' 00:10:01.098 killing process with pid 849176 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 849176 00:10:01.098 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 849176 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.358 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.899 00:10:03.899 real 0m15.000s 00:10:03.899 user 0m33.426s 00:10:03.899 sys 0m5.332s 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.899 ************************************ 00:10:03.899 END TEST nvmf_nmic 00:10:03.899 ************************************ 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.899 16:15:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.899 ************************************ 00:10:03.899 START TEST nvmf_fio_target 00:10:03.899 ************************************ 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:03.899 * Looking for test storage... 
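[Editor's note] Condensing the nmic run that just completed: once the target is up, everything is driven over JSON-RPC, and the interesting assertion is that a malloc bdev already claimed by one subsystem cannot be added to a second. A hedged sketch of the sequence, where rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log, and HOSTNQN/HOSTID stand for the gen-hostnqn UUID values shown above:

    # Happy path (target/nmic.sh): transport, subsystem, namespace, listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Test case 1: a second subsystem may not reuse the claimed bdev.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail

    # Test case 2: one subsystem, two TCP paths (ports 4420 and 4421).
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The second nvmf_subsystem_add_ns fails as intended because Malloc0 is already claimed (type exclusive_write) by the NVMe-oF target module, producing the JSON-RPC error recorded above (code -32602, "Invalid parameters"). After the fio write/verify pass, a single "nvme disconnect -n nqn.2016-06.io.spdk:cnode1" tears down both paths, matching the "disconnected 2 controller(s)" line in the log.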
00:10:03.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.899 --rc genhtml_branch_coverage=1 00:10:03.899 --rc genhtml_function_coverage=1 00:10:03.899 --rc genhtml_legend=1 00:10:03.899 --rc geninfo_all_blocks=1 00:10:03.899 --rc geninfo_unexecuted_blocks=1 00:10:03.899 00:10:03.899 ' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.899 --rc genhtml_branch_coverage=1 00:10:03.899 --rc genhtml_function_coverage=1 00:10:03.899 --rc genhtml_legend=1 00:10:03.899 --rc geninfo_all_blocks=1 00:10:03.899 --rc geninfo_unexecuted_blocks=1 00:10:03.899 00:10:03.899 ' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.899 --rc genhtml_branch_coverage=1 00:10:03.899 --rc genhtml_function_coverage=1 00:10:03.899 --rc genhtml_legend=1 00:10:03.899 --rc geninfo_all_blocks=1 00:10:03.899 --rc geninfo_unexecuted_blocks=1 00:10:03.899 00:10:03.899 ' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.899 --rc genhtml_branch_coverage=1 00:10:03.899 --rc genhtml_function_coverage=1 00:10:03.899 --rc genhtml_legend=1 00:10:03.899 --rc geninfo_all_blocks=1 00:10:03.899 --rc geninfo_unexecuted_blocks=1 00:10:03.899 00:10:03.899 ' 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.899 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.900 16:15:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.900 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.477 16:15:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:10.477 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:10.477 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.477 16:15:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:10.477 Found net devices under 0000:af:00.0: cvl_0_0 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:10.477 Found net devices under 0000:af:00.1: cvl_0_1 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.477 16:15:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.477 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.477 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.477 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.477 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.477 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:10:10.478 00:10:10.478 --- 10.0.0.2 ping statistics --- 00:10:10.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.478 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:10:10.478 00:10:10.478 --- 10.0.0.1 ping statistics --- 00:10:10.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.478 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=853801 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 853801 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 853801 ']' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.478 [2024-12-16 16:15:58.268868] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
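[Editor's note] In the nvmf_fio_target setup that follows, the namespace mix is deliberately varied: besides two plain malloc bdevs, target/fio.sh assembles a RAID0 and a concat bdev from further mallocs, so the host-side fio job ends up exercising four differently-backed namespaces under one controller. A sketch of the bdev topology with the names and geometry from the xtrace below (ordering slightly condensed; without -b, bdev_malloc_create prints the auto-assigned MallocN name, which the script captures):

    # Two standalone 64 MiB malloc bdevs (512 B blocks).
    rpc.py bdev_malloc_create 64 512          # -> Malloc0
    rpc.py bdev_malloc_create 64 512          # -> Malloc1
    # RAID0 across two more mallocs, 64 KiB strip size (-z), level 0 (-r).
    rpc.py bdev_malloc_create 64 512          # -> Malloc2
    rpc.py bdev_malloc_create 64 512          # -> Malloc3
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    # Concatenation of three further mallocs (Malloc4..Malloc6).
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # Expose all four as namespaces of one subsystem, listening on 10.0.0.2:4420.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in Malloc0 Malloc1 raid0 concat0; do
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the host this surfaces as nvme0n1 through nvme0n4, and the fio wrapper emits one [jobN] stanza per device with do_verify=1 and verify=crc32c-intel, which is the four-job layout visible in the fio output further down.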
00:10:10.478 [2024-12-16 16:15:58.268915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.478 [2024-12-16 16:15:58.349020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.478 [2024-12-16 16:15:58.372106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.478 [2024-12-16 16:15:58.372144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.478 [2024-12-16 16:15:58.372151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.478 [2024-12-16 16:15:58.372160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.478 [2024-12-16 16:15:58.372165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.478 [2024-12-16 16:15:58.373471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.478 [2024-12-16 16:15:58.373585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.478 [2024-12-16 16:15:58.373693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.478 [2024-12-16 16:15:58.373693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:10.478 [2024-12-16 16:15:58.675021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:10.478 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.738 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:10.738 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.997 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:10.997 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.997 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:10.997 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:11.256 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.515 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:11.515 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.774 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:11.774 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.033 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:12.033 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:12.033 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.293 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.293 16:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.552 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.552 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.810 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.810 [2024-12-16 16:16:01.402079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.069 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:13.069 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:13.329 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.709 16:16:02 
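Taken together, the rpc.py calls above build the complete export path: TCP transport, malloc bdevs, a raid0 and a concat raid bdev, the cnode1 subsystem with four namespaces, a TCP listener, and a kernel-initiator connect. Condensed into a bash sketch in trace order (each command appears verbatim above; the bdev creation is omitted, and the real nvme connect also passes --hostnqn/--hostid, dropped here for brevity):

    # Export the bdevs over NVMe/TCP and attach the kernel initiator.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Namespace order matters for what follows: Malloc0 and Malloc1 surface on the initiator as /dev/nvme0n1 and /dev/nvme0n2, while raid0 and concat0 become /dev/nvme0n3 and /dev/nvme0n4, the four devices the fio jobs below target.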
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:14.709 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.709 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.709 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:14.709 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:14.709 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:16.615 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:16.615 [global] 00:10:16.615 thread=1 00:10:16.615 invalidate=1 00:10:16.615 rw=write 00:10:16.615 time_based=1 00:10:16.615 runtime=1 00:10:16.615 ioengine=libaio 00:10:16.615 direct=1 00:10:16.615 bs=4096 00:10:16.615 iodepth=1 00:10:16.615 norandommap=0 00:10:16.615 numjobs=1 00:10:16.615 00:10:16.615 verify_dump=1 00:10:16.615 verify_backlog=512 00:10:16.615 verify_state_save=0 00:10:16.615 do_verify=1 00:10:16.615 verify=crc32c-intel 00:10:16.615 [job0] 00:10:16.615 filename=/dev/nvme0n1 00:10:16.615 [job1] 00:10:16.615 filename=/dev/nvme0n2 00:10:16.615 [job2] 00:10:16.615 filename=/dev/nvme0n3 00:10:16.615 [job3] 00:10:16.615 filename=/dev/nvme0n4 00:10:16.615 Could not set queue depth (nvme0n1) 00:10:16.615 Could not set queue depth (nvme0n2) 00:10:16.615 Could not set queue depth (nvme0n3) 00:10:16.615 Could not set queue depth (nvme0n4) 00:10:16.874 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.874 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.874 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.874 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.874 fio-3.35 00:10:16.874 Starting 4 threads 00:10:18.273 00:10:18.273 job0: (groupid=0, jobs=1): err= 0: pid=855268: Mon Dec 16 16:16:06 2024 00:10:18.273 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:10:18.273 slat (nsec): min=9696, max=24922, avg=21761.35, stdev=3896.09 00:10:18.273 clat (usec): min=260, max=42062, avg=39365.80, stdev=8534.93 00:10:18.273 lat (usec): min=284, max=42084, avg=39387.56, stdev=8534.47 00:10:18.273 clat percentiles (usec): 00:10:18.273 | 1.00th=[ 262], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:10:18.273 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.273 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:18.273 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:18.273 | 99.99th=[42206] 00:10:18.273 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:18.273 slat (nsec): min=9719, max=54803, avg=11419.57, stdev=2885.76 00:10:18.273 clat (usec): min=118, max=296, avg=177.16, stdev=20.18 00:10:18.273 lat (usec): min=142, max=318, avg=188.58, stdev=20.47 00:10:18.273 clat percentiles (usec): 00:10:18.273 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:18.273 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:18.273 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 219], 00:10:18.273 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 297], 00:10:18.273 | 99.99th=[ 297] 00:10:18.273 bw ( KiB/s): min= 4096, max= 4096, per=23.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.274 lat (usec) : 250=94.39%, 500=1.50% 00:10:18.274 lat (msec) : 50=4.11% 00:10:18.274 cpu : usr=0.40%, sys=0.90%, ctx=535, majf=0, minf=1 00:10:18.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.274 job1: (groupid=0, jobs=1): err= 0: pid=855270: Mon Dec 16 16:16:06 2024 00:10:18.274 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:18.274 slat (nsec): min=6787, max=38947, avg=7919.01, stdev=1334.93 00:10:18.274 clat (usec): min=161, max=2324, avg=201.30, stdev=50.13 00:10:18.274 lat (usec): min=169, max=2336, avg=209.22, stdev=50.28 00:10:18.274 clat percentiles (usec): 00:10:18.274 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:10:18.274 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:10:18.274 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 231], 00:10:18.274 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 474], 99.95th=[ 1270], 00:10:18.274 | 99.99th=[ 2311] 00:10:18.274 write: IOPS=2868, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:10:18.274 slat (nsec): min=9774, max=50563, avg=11055.98, stdev=2041.82 00:10:18.274 clat (usec): min=109, max=603, avg=145.34, stdev=27.17 00:10:18.274 lat (usec): min=119, max=614, avg=156.40, stdev=27.58 00:10:18.274 clat percentiles (usec): 00:10:18.274 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 128], 00:10:18.274 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:10:18.274 | 70.00th=[ 149], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 192], 00:10:18.274 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 285], 99.95th=[ 351], 00:10:18.274 | 99.99th=[ 603] 00:10:18.274 bw ( KiB/s): min=12288, max=12288, per=71.94%, avg=12288.00, stdev= 0.00, samples=1 00:10:18.274 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:18.274 lat (usec) : 250=98.99%, 500=0.96%, 750=0.02% 00:10:18.274 lat (msec) : 2=0.02%, 4=0.02% 00:10:18.274 cpu : usr=4.90%, sys=7.90%, ctx=5431, majf=0, minf=1 00:10:18.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:18.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 issued rwts: total=2560,2871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.274 job2: (groupid=0, jobs=1): err= 0: pid=855277: Mon Dec 16 16:16:06 2024 00:10:18.274 read: IOPS=22, BW=91.4KiB/s (93.6kB/s)(92.0KiB/1007msec) 00:10:18.274 slat (nsec): min=10284, max=15968, avg=13391.13, stdev=1115.68 00:10:18.274 clat (usec): min=274, max=43019, avg=39350.44, stdev=8531.14 00:10:18.274 lat (usec): min=288, max=43033, avg=39363.83, stdev=8530.96 00:10:18.274 clat percentiles (usec): 00:10:18.274 | 1.00th=[ 273], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:18.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.274 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:18.274 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:18.274 | 99.99th=[43254] 00:10:18.274 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:18.274 slat (nsec): min=10857, max=36447, avg=13654.54, stdev=2301.24 00:10:18.274 clat (usec): min=146, max=871, avg=181.44, stdev=40.94 00:10:18.274 lat (usec): min=158, max=907, avg=195.10, stdev=42.09 00:10:18.274 clat percentiles (usec): 00:10:18.274 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:10:18.274 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:18.274 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 208], 00:10:18.274 | 99.00th=[ 260], 99.50th=[ 314], 99.90th=[ 873], 99.95th=[ 873], 00:10:18.274 | 99.99th=[ 873] 00:10:18.274 bw ( KiB/s): min= 4096, max= 4096, per=23.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.274 lat (usec) : 250=94.39%, 500=1.12%, 750=0.19%, 1000=0.19% 00:10:18.274 lat (msec) : 50=4.11% 00:10:18.274 cpu : usr=0.50%, sys=0.89%, ctx=538, majf=0, minf=1 00:10:18.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.274 job3: (groupid=0, jobs=1): err= 0: pid=855278: Mon Dec 16 16:16:06 2024 00:10:18.274 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:10:18.274 slat (nsec): min=10915, max=29678, avg=23114.27, stdev=3099.92 00:10:18.274 clat (usec): min=40811, max=42022, avg=41011.91, stdev=236.68 00:10:18.274 lat (usec): min=40834, max=42045, avg=41035.02, stdev=236.89 00:10:18.274 clat percentiles (usec): 00:10:18.274 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:18.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.274 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:18.274 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:18.274 | 99.99th=[42206] 00:10:18.274 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:18.274 slat (nsec): min=10974, max=42547, avg=12855.58, stdev=2538.77 00:10:18.274 clat (usec): min=147, max=272, avg=235.46, stdev=17.04 
00:10:18.274 lat (usec): min=160, max=283, avg=248.32, stdev=16.80 00:10:18.274 clat percentiles (usec): 00:10:18.274 | 1.00th=[ 157], 5.00th=[ 198], 10.00th=[ 233], 20.00th=[ 237], 00:10:18.274 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:10:18.274 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 243], 95.00th=[ 245], 00:10:18.274 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 273], 00:10:18.274 | 99.99th=[ 273] 00:10:18.274 bw ( KiB/s): min= 4096, max= 4096, per=23.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.274 lat (usec) : 250=94.94%, 500=0.94% 00:10:18.274 lat (msec) : 50=4.12% 00:10:18.274 cpu : usr=0.19%, sys=1.16%, ctx=535, majf=0, minf=1 00:10:18.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.274 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.274 00:10:18.274 Run status group 0 (all jobs): 00:10:18.274 READ: bw=9.95MiB/s (10.4MB/s), 85.3KiB/s-9.99MiB/s (87.3kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1032msec 00:10:18.274 WRITE: bw=16.7MiB/s (17.5MB/s), 1984KiB/s-11.2MiB/s (2032kB/s-11.7MB/s), io=17.2MiB (18.1MB), run=1001-1032msec 00:10:18.274 00:10:18.274 Disk stats (read/write): 00:10:18.274 nvme0n1: ios=69/512, merge=0/0, ticks=768/81, in_queue=849, util=87.07% 00:10:18.274 nvme0n2: ios=2182/2560, merge=0/0, ticks=475/351, in_queue=826, util=91.07% 00:10:18.274 nvme0n3: ios=42/512, merge=0/0, ticks=1646/91, in_queue=1737, util=93.54% 00:10:18.274 nvme0n4: ios=81/512, merge=0/0, ticks=1060/114, in_queue=1174, util=95.28% 00:10:18.274 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:18.274 [global] 00:10:18.274 thread=1 00:10:18.274 invalidate=1 00:10:18.274 rw=randwrite 00:10:18.274 time_based=1 00:10:18.274 runtime=1 00:10:18.274 ioengine=libaio 00:10:18.274 direct=1 00:10:18.274 bs=4096 00:10:18.274 iodepth=1 00:10:18.274 norandommap=0 00:10:18.274 numjobs=1 00:10:18.274 00:10:18.274 verify_dump=1 00:10:18.274 verify_backlog=512 00:10:18.274 verify_state_save=0 00:10:18.274 do_verify=1 00:10:18.274 verify=crc32c-intel 00:10:18.274 [job0] 00:10:18.274 filename=/dev/nvme0n1 00:10:18.274 [job1] 00:10:18.274 filename=/dev/nvme0n2 00:10:18.274 [job2] 00:10:18.274 filename=/dev/nvme0n3 00:10:18.274 [job3] 00:10:18.274 filename=/dev/nvme0n4 00:10:18.274 Could not set queue depth (nvme0n1) 00:10:18.274 Could not set queue depth (nvme0n2) 00:10:18.274 Could not set queue depth (nvme0n3) 00:10:18.274 Could not set queue depth (nvme0n4) 00:10:18.538 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.538 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.538 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.538 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.538 fio-3.35 00:10:18.538 Starting 4 threads 00:10:19.916 00:10:19.916 job0: (groupid=0, jobs=1): err= 0: 
pid=855639: Mon Dec 16 16:16:08 2024 00:10:19.916 read: IOPS=509, BW=2039KiB/s (2088kB/s)(2108KiB/1034msec) 00:10:19.916 slat (nsec): min=6561, max=23965, avg=7817.26, stdev=2892.63 00:10:19.916 clat (usec): min=179, max=41977, avg=1609.99, stdev=7424.84 00:10:19.916 lat (usec): min=186, max=42000, avg=1617.81, stdev=7427.33 00:10:19.916 clat percentiles (usec): 00:10:19.916 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 204], 00:10:19.916 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:10:19.916 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 243], 00:10:19.916 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:19.916 | 99.99th=[42206] 00:10:19.916 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:10:19.916 slat (nsec): min=9441, max=41144, avg=10789.35, stdev=2145.16 00:10:19.916 clat (usec): min=113, max=294, avg=162.60, stdev=28.05 00:10:19.916 lat (usec): min=124, max=326, avg=173.39, stdev=28.36 00:10:19.916 clat percentiles (usec): 00:10:19.916 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:10:19.916 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:10:19.916 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 237], 00:10:19.916 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 285], 99.95th=[ 293], 00:10:19.916 | 99.99th=[ 293] 00:10:19.916 bw ( KiB/s): min= 8192, max= 8192, per=45.96%, avg=8192.00, stdev= 0.00, samples=1 00:10:19.916 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:19.916 lat (usec) : 250=98.26%, 500=0.58% 00:10:19.916 lat (msec) : 50=1.16% 00:10:19.916 cpu : usr=1.16%, sys=1.16%, ctx=1555, majf=0, minf=1 00:10:19.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.916 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.916 job1: (groupid=0, jobs=1): err= 0: pid=855640: Mon Dec 16 16:16:08 2024 00:10:19.916 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:10:19.916 slat (nsec): min=9691, max=25786, avg=22586.00, stdev=3046.40 00:10:19.916 clat (usec): min=40735, max=41984, avg=41041.98, stdev=315.36 00:10:19.916 lat (usec): min=40745, max=42007, avg=41064.57, stdev=316.28 00:10:19.916 clat percentiles (usec): 00:10:19.916 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:19.916 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:19.916 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:19.916 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.916 | 99.99th=[42206] 00:10:19.916 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:19.916 slat (nsec): min=9763, max=39783, avg=11266.74, stdev=1968.31 00:10:19.916 clat (usec): min=139, max=329, avg=183.57, stdev=22.11 00:10:19.916 lat (usec): min=150, max=340, avg=194.84, stdev=22.45 00:10:19.916 clat percentiles (usec): 00:10:19.916 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:10:19.916 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:10:19.916 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 239], 00:10:19.916 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 330], 99.95th=[ 330], 
00:10:19.916 | 99.99th=[ 330] 00:10:19.916 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.916 lat (usec) : 250=94.76%, 500=1.12% 00:10:19.916 lat (msec) : 50=4.12% 00:10:19.916 cpu : usr=0.70%, sys=0.70%, ctx=534, majf=0, minf=2 00:10:19.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.916 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.916 job2: (groupid=0, jobs=1): err= 0: pid=855641: Mon Dec 16 16:16:08 2024 00:10:19.916 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:10:19.916 slat (nsec): min=12239, max=40660, avg=23283.23, stdev=6450.90 00:10:19.916 clat (usec): min=40836, max=42003, avg=41059.44, stdev=305.03 00:10:19.916 lat (usec): min=40877, max=42016, avg=41082.73, stdev=302.32 00:10:19.916 clat percentiles (usec): 00:10:19.916 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:19.916 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:19.916 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:19.916 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.916 | 99.99th=[42206] 00:10:19.916 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:19.916 slat (nsec): min=9587, max=55094, avg=12786.26, stdev=5237.84 00:10:19.916 clat (usec): min=136, max=354, avg=181.37, stdev=18.23 00:10:19.916 lat (usec): min=148, max=395, avg=194.16, stdev=19.32 00:10:19.916 clat percentiles (usec): 00:10:19.916 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:10:19.916 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:19.916 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 212], 00:10:19.916 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 355], 99.95th=[ 355], 00:10:19.916 | 99.99th=[ 355] 00:10:19.916 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.916 lat (usec) : 250=95.51%, 500=0.37% 00:10:19.916 lat (msec) : 50=4.12% 00:10:19.916 cpu : usr=0.30%, sys=0.80%, ctx=535, majf=0, minf=1 00:10:19.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.917 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.917 job3: (groupid=0, jobs=1): err= 0: pid=855642: Mon Dec 16 16:16:08 2024 00:10:19.917 read: IOPS=2389, BW=9558KiB/s (9788kB/s)(9568KiB/1001msec) 00:10:19.917 slat (nsec): min=7301, max=45931, avg=8361.22, stdev=1464.96 00:10:19.917 clat (usec): min=163, max=41014, avg=225.84, stdev=835.00 00:10:19.917 lat (usec): min=171, max=41023, avg=234.20, stdev=835.01 00:10:19.917 clat percentiles (usec): 00:10:19.917 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:10:19.917 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:10:19.917 | 70.00th=[ 210], 80.00th=[ 221], 
90.00th=[ 247], 95.00th=[ 277], 00:10:19.917 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 441], 99.95th=[ 506], 00:10:19.917 | 99.99th=[41157] 00:10:19.917 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:19.917 slat (nsec): min=10513, max=46042, avg=11620.55, stdev=1738.68 00:10:19.917 clat (usec): min=121, max=333, avg=154.42, stdev=17.52 00:10:19.917 lat (usec): min=133, max=346, avg=166.04, stdev=17.75 00:10:19.917 clat percentiles (usec): 00:10:19.917 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:10:19.917 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:10:19.917 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:10:19.917 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 273], 99.95th=[ 281], 00:10:19.917 | 99.99th=[ 334] 00:10:19.917 bw ( KiB/s): min= 9872, max= 9872, per=55.38%, avg=9872.00, stdev= 0.00, samples=1 00:10:19.917 iops : min= 2468, max= 2468, avg=2468.00, stdev= 0.00, samples=1 00:10:19.917 lat (usec) : 250=95.32%, 500=4.64%, 750=0.02% 00:10:19.917 lat (msec) : 50=0.02% 00:10:19.917 cpu : usr=4.20%, sys=7.80%, ctx=4953, majf=0, minf=1 00:10:19.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.917 issued rwts: total=2392,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.917 00:10:19.917 Run status group 0 (all jobs): 00:10:19.917 READ: bw=11.2MiB/s (11.7MB/s), 87.6KiB/s-9558KiB/s (89.7kB/s-9788kB/s), io=11.6MiB (12.1MB), run=1001-1034msec 00:10:19.917 WRITE: bw=17.4MiB/s (18.3MB/s), 2038KiB/s-9.99MiB/s (2087kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1034msec 00:10:19.917 00:10:19.917 Disk stats (read/write): 00:10:19.917 nvme0n1: ios=545/1024, merge=0/0, ticks=1109/160, in_queue=1269, util=97.19% 00:10:19.917 nvme0n2: ios=67/512, merge=0/0, ticks=761/93, in_queue=854, util=87.95% 00:10:19.917 nvme0n3: ios=41/512, merge=0/0, ticks=1683/78, in_queue=1761, util=93.41% 00:10:19.917 nvme0n4: ios=2106/2094, merge=0/0, ticks=681/303, in_queue=984, util=97.89% 00:10:19.917 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:19.917 [global] 00:10:19.917 thread=1 00:10:19.917 invalidate=1 00:10:19.917 rw=write 00:10:19.917 time_based=1 00:10:19.917 runtime=1 00:10:19.917 ioengine=libaio 00:10:19.917 direct=1 00:10:19.917 bs=4096 00:10:19.917 iodepth=128 00:10:19.917 norandommap=0 00:10:19.917 numjobs=1 00:10:19.917 00:10:19.917 verify_dump=1 00:10:19.917 verify_backlog=512 00:10:19.917 verify_state_save=0 00:10:19.917 do_verify=1 00:10:19.917 verify=crc32c-intel 00:10:19.917 [job0] 00:10:19.917 filename=/dev/nvme0n1 00:10:19.917 [job1] 00:10:19.917 filename=/dev/nvme0n2 00:10:19.917 [job2] 00:10:19.917 filename=/dev/nvme0n3 00:10:19.917 [job3] 00:10:19.917 filename=/dev/nvme0n4 00:10:19.917 Could not set queue depth (nvme0n1) 00:10:19.917 Could not set queue depth (nvme0n2) 00:10:19.917 Could not set queue depth (nvme0n3) 00:10:19.917 Could not set queue depth (nvme0n4) 00:10:20.176 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.176 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:20.176 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.176 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.176 fio-3.35 00:10:20.176 Starting 4 threads 00:10:21.557 00:10:21.557 job0: (groupid=0, jobs=1): err= 0: pid=856014: Mon Dec 16 16:16:09 2024 00:10:21.557 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:10:21.557 slat (nsec): min=1698, max=24795k, avg=230822.40, stdev=1322380.96 00:10:21.557 clat (usec): min=14712, max=84602, avg=27469.82, stdev=14463.20 00:10:21.557 lat (usec): min=14720, max=84610, avg=27700.64, stdev=14569.87 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[15926], 5.00th=[17695], 10.00th=[18482], 20.00th=[19792], 00:10:21.557 | 30.00th=[20579], 40.00th=[21365], 50.00th=[21890], 60.00th=[23200], 00:10:21.557 | 70.00th=[24249], 80.00th=[26608], 90.00th=[47449], 95.00th=[60556], 00:10:21.557 | 99.00th=[81265], 99.50th=[81265], 99.90th=[82314], 99.95th=[84411], 00:10:21.557 | 99.99th=[84411] 00:10:21.557 write: IOPS=1789, BW=7158KiB/s (7330kB/s)(7208KiB/1007msec); 0 zone resets 00:10:21.557 slat (usec): min=2, max=35366, avg=351.41, stdev=1984.44 00:10:21.557 clat (msec): min=5, max=128, avg=45.08, stdev=29.94 00:10:21.557 lat (msec): min=7, max=128, avg=45.43, stdev=30.12 00:10:21.557 clat percentiles (msec): 00:10:21.557 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 24], 00:10:21.557 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 42], 00:10:21.557 | 70.00th=[ 55], 80.00th=[ 79], 90.00th=[ 94], 95.00th=[ 100], 00:10:21.557 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:10:21.557 | 99.99th=[ 129] 00:10:21.557 bw ( KiB/s): min= 5200, max= 8192, per=11.15%, avg=6696.00, stdev=2115.66, samples=2 00:10:21.557 iops : min= 1300, max= 2048, avg=1674.00, stdev=528.92, samples=2 00:10:21.557 lat (msec) : 10=0.27%, 20=17.14%, 50=61.17%, 100=18.72%, 250=2.70% 00:10:21.557 cpu : usr=1.39%, sys=2.58%, ctx=234, majf=0, minf=1 00:10:21.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:10:21.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.557 issued rwts: total=1536,1802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.557 job1: (groupid=0, jobs=1): err= 0: pid=856017: Mon Dec 16 16:16:09 2024 00:10:21.557 read: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec) 00:10:21.557 slat (nsec): min=1287, max=8482.0k, avg=69846.85, stdev=497407.56 00:10:21.557 clat (usec): min=2703, max=22335, avg=8703.88, stdev=2403.71 00:10:21.557 lat (usec): min=2709, max=22346, avg=8773.73, stdev=2440.70 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[ 3523], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7177], 00:10:21.557 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8094], 00:10:21.557 | 70.00th=[ 8586], 80.00th=[11076], 90.00th=[12911], 95.00th=[13304], 00:10:21.557 | 99.00th=[14484], 99.50th=[15139], 99.90th=[18744], 99.95th=[18744], 00:10:21.557 | 99.99th=[22414] 00:10:21.557 write: IOPS=7499, BW=29.3MiB/s (30.7MB/s)(29.6MiB/1009msec); 0 zone resets 00:10:21.557 slat (nsec): min=1989, max=9680.4k, avg=59943.01, stdev=294642.00 00:10:21.557 clat (usec): min=1513, max=56242, avg=8638.64, stdev=6216.01 00:10:21.557 lat 
(usec): min=1528, max=56248, avg=8698.58, stdev=6258.67 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[ 2769], 5.00th=[ 3851], 10.00th=[ 4883], 20.00th=[ 6521], 00:10:21.557 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:10:21.557 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[19792], 00:10:21.557 | 99.00th=[43779], 99.50th=[47973], 99.90th=[50070], 99.95th=[56361], 00:10:21.557 | 99.99th=[56361] 00:10:21.557 bw ( KiB/s): min=26040, max=33472, per=49.53%, avg=29756.00, stdev=5255.22, samples=2 00:10:21.557 iops : min= 6510, max= 8368, avg=7439.00, stdev=1313.80, samples=2 00:10:21.557 lat (msec) : 2=0.15%, 4=3.50%, 10=79.93%, 20=14.09%, 50=2.24% 00:10:21.557 lat (msec) : 100=0.10% 00:10:21.557 cpu : usr=5.65%, sys=7.04%, ctx=929, majf=0, minf=1 00:10:21.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:21.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.557 issued rwts: total=7168,7567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.557 job2: (groupid=0, jobs=1): err= 0: pid=856018: Mon Dec 16 16:16:09 2024 00:10:21.557 read: IOPS=2795, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1006msec) 00:10:21.557 slat (nsec): min=1683, max=19403k, avg=176117.89, stdev=1221446.66 00:10:21.557 clat (usec): min=3895, max=49831, avg=20181.32, stdev=7789.34 00:10:21.557 lat (usec): min=6908, max=49840, avg=20357.44, stdev=7895.01 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[ 8356], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[12518], 00:10:21.557 | 30.00th=[15664], 40.00th=[18482], 50.00th=[20055], 60.00th=[20841], 00:10:21.557 | 70.00th=[22414], 80.00th=[24511], 90.00th=[31589], 95.00th=[35390], 00:10:21.557 | 99.00th=[44827], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:10:21.557 | 99.99th=[50070] 00:10:21.557 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:21.557 slat (usec): min=2, max=17449, avg=158.09, stdev=763.39 00:10:21.557 clat (usec): min=1567, max=49792, avg=23042.77, stdev=9252.84 00:10:21.557 lat (usec): min=1582, max=49796, avg=23200.86, stdev=9324.95 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10683], 00:10:21.557 | 30.00th=[17957], 40.00th=[22676], 50.00th=[25560], 60.00th=[26084], 00:10:21.557 | 70.00th=[28181], 80.00th=[31327], 90.00th=[33424], 95.00th=[37487], 00:10:21.557 | 99.00th=[43254], 99.50th=[43779], 99.90th=[47973], 99.95th=[49546], 00:10:21.557 | 99.99th=[49546] 00:10:21.557 bw ( KiB/s): min=12144, max=12432, per=20.45%, avg=12288.00, stdev=203.65, samples=2 00:10:21.557 iops : min= 3036, max= 3108, avg=3072.00, stdev=50.91, samples=2 00:10:21.557 lat (msec) : 2=0.03%, 4=0.12%, 10=8.72%, 20=34.36%, 50=56.76% 00:10:21.557 cpu : usr=2.89%, sys=3.88%, ctx=314, majf=0, minf=2 00:10:21.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:21.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.557 issued rwts: total=2812,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.557 job3: (groupid=0, jobs=1): err= 0: pid=856019: Mon Dec 16 16:16:09 2024 00:10:21.557 read: IOPS=2529, 
BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:10:21.557 slat (nsec): min=1431, max=20299k, avg=171212.39, stdev=1171034.23 00:10:21.557 clat (usec): min=5204, max=56294, avg=19672.33, stdev=10360.62 00:10:21.557 lat (usec): min=5212, max=56305, avg=19843.54, stdev=10446.81 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[ 7767], 5.00th=[ 8094], 10.00th=[ 8225], 20.00th=[10159], 00:10:21.557 | 30.00th=[13304], 40.00th=[17433], 50.00th=[18744], 60.00th=[19530], 00:10:21.557 | 70.00th=[22414], 80.00th=[23200], 90.00th=[36439], 95.00th=[42730], 00:10:21.557 | 99.00th=[52167], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:10:21.557 | 99.99th=[56361] 00:10:21.557 write: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(10.8MiB/1012msec); 0 zone resets 00:10:21.557 slat (usec): min=3, max=27523, avg=196.85, stdev=1020.54 00:10:21.557 clat (usec): min=2338, max=94044, avg=28197.91, stdev=13767.63 00:10:21.557 lat (usec): min=2345, max=94057, avg=28394.75, stdev=13841.88 00:10:21.557 clat percentiles (usec): 00:10:21.557 | 1.00th=[ 4555], 5.00th=[10421], 10.00th=[15926], 20.00th=[19006], 00:10:21.557 | 30.00th=[23987], 40.00th=[25822], 50.00th=[26346], 60.00th=[29230], 00:10:21.557 | 70.00th=[31065], 80.00th=[32637], 90.00th=[36439], 95.00th=[54264], 00:10:21.558 | 99.00th=[89654], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:10:21.558 | 99.99th=[93848] 00:10:21.558 bw ( KiB/s): min= 8960, max=12080, per=17.51%, avg=10520.00, stdev=2206.17, samples=2 00:10:21.558 iops : min= 2240, max= 3020, avg=2630.00, stdev=551.54, samples=2 00:10:21.558 lat (msec) : 4=0.38%, 10=11.15%, 20=30.44%, 50=54.72%, 100=3.31% 00:10:21.558 cpu : usr=2.67%, sys=3.66%, ctx=313, majf=0, minf=1 00:10:21.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:21.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.558 issued rwts: total=2560,2758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.558 00:10:21.558 Run status group 0 (all jobs): 00:10:21.558 READ: bw=54.3MiB/s (57.0MB/s), 6101KiB/s-27.8MiB/s (6248kB/s-29.1MB/s), io=55.0MiB (57.7MB), run=1006-1012msec 00:10:21.558 WRITE: bw=58.7MiB/s (61.5MB/s), 7158KiB/s-29.3MiB/s (7330kB/s-30.7MB/s), io=59.4MiB (62.3MB), run=1006-1012msec 00:10:21.558 00:10:21.558 Disk stats (read/write): 00:10:21.558 nvme0n1: ios=1076/1536, merge=0/0, ticks=11683/24423, in_queue=36106, util=97.29% 00:10:21.558 nvme0n2: ios=6690/7106, merge=0/0, ticks=52874/48096, in_queue=100970, util=99.19% 00:10:21.558 nvme0n3: ios=2536/2560, merge=0/0, ticks=49996/55309, in_queue=105305, util=88.87% 00:10:21.558 nvme0n4: ios=2091/2151, merge=0/0, ticks=44867/55843, in_queue=100710, util=96.54% 00:10:21.558 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:21.558 [global] 00:10:21.558 thread=1 00:10:21.558 invalidate=1 00:10:21.558 rw=randwrite 00:10:21.558 time_based=1 00:10:21.558 runtime=1 00:10:21.558 ioengine=libaio 00:10:21.558 direct=1 00:10:21.558 bs=4096 00:10:21.558 iodepth=128 00:10:21.558 norandommap=0 00:10:21.558 numjobs=1 00:10:21.558 00:10:21.558 verify_dump=1 00:10:21.558 verify_backlog=512 00:10:21.558 verify_state_save=0 00:10:21.558 do_verify=1 00:10:21.558 verify=crc32c-intel 00:10:21.558 [job0] 00:10:21.558 
filename=/dev/nvme0n1 00:10:21.558 [job1] 00:10:21.558 filename=/dev/nvme0n2 00:10:21.558 [job2] 00:10:21.558 filename=/dev/nvme0n3 00:10:21.558 [job3] 00:10:21.558 filename=/dev/nvme0n4 00:10:21.558 Could not set queue depth (nvme0n1) 00:10:21.558 Could not set queue depth (nvme0n2) 00:10:21.558 Could not set queue depth (nvme0n3) 00:10:21.558 Could not set queue depth (nvme0n4) 00:10:21.558 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.558 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.558 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.558 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.558 fio-3.35 00:10:21.558 Starting 4 threads 00:10:22.939 00:10:22.939 job0: (groupid=0, jobs=1): err= 0: pid=856380: Mon Dec 16 16:16:11 2024 00:10:22.939 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:22.939 slat (nsec): min=1478, max=9351.3k, avg=78497.63, stdev=431216.95 00:10:22.939 clat (usec): min=7330, max=20635, avg=10243.66, stdev=1505.96 00:10:22.939 lat (usec): min=7339, max=20641, avg=10322.15, stdev=1540.18 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9634], 00:10:22.939 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:10:22.939 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[11731], 00:10:22.939 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20579], 99.95th=[20579], 00:10:22.939 | 99.99th=[20579] 00:10:22.939 write: IOPS=6111, BW=23.9MiB/s (25.0MB/s)(23.9MiB/1003msec); 0 zone resets 00:10:22.939 slat (usec): min=2, max=44964, avg=85.03, stdev=727.70 00:10:22.939 clat (usec): min=544, max=69999, avg=11334.49, stdev=7842.58 00:10:22.939 lat (usec): min=557, max=70010, avg=11419.52, stdev=7867.10 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 4293], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[ 9634], 00:10:22.939 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[10028], 00:10:22.939 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[17695], 00:10:22.939 | 99.00th=[66847], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:10:22.939 | 99.99th=[69731] 00:10:22.939 bw ( KiB/s): min=21440, max=26584, per=33.51%, avg=24012.00, stdev=3637.36, samples=2 00:10:22.939 iops : min= 5360, max= 6646, avg=6003.00, stdev=909.34, samples=2 00:10:22.939 lat (usec) : 750=0.03% 00:10:22.939 lat (msec) : 2=0.06%, 4=0.33%, 10=52.24%, 20=45.13%, 50=1.14% 00:10:22.939 lat (msec) : 100=1.08% 00:10:22.939 cpu : usr=4.59%, sys=7.39%, ctx=584, majf=0, minf=1 00:10:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.939 issued rwts: total=5632,6130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.939 job1: (groupid=0, jobs=1): err= 0: pid=856381: Mon Dec 16 16:16:11 2024 00:10:22.939 read: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1007msec) 00:10:22.939 slat (nsec): min=1685, max=20579k, avg=128299.94, stdev=904727.57 00:10:22.939 clat (usec): min=5546, max=56453, avg=15465.61, stdev=6153.98 00:10:22.939 lat 
(usec): min=8699, max=56476, avg=15593.91, stdev=6227.79 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[10159], 5.00th=[11207], 10.00th=[12387], 20.00th=[12649], 00:10:22.939 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:10:22.939 | 70.00th=[14222], 80.00th=[15139], 90.00th=[22938], 95.00th=[30016], 00:10:22.939 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[45876], 00:10:22.939 | 99.99th=[56361] 00:10:22.939 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:10:22.939 slat (usec): min=2, max=17844, avg=160.38, stdev=968.15 00:10:22.939 clat (usec): min=6836, max=60859, avg=22061.34, stdev=11634.79 00:10:22.939 lat (usec): min=6845, max=60862, avg=22221.72, stdev=11713.40 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11076], 20.00th=[11994], 00:10:22.939 | 30.00th=[12780], 40.00th=[18744], 50.00th=[20841], 60.00th=[21365], 00:10:22.939 | 70.00th=[23725], 80.00th=[31065], 90.00th=[35390], 95.00th=[49546], 00:10:22.939 | 99.00th=[59507], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:10:22.939 | 99.99th=[61080] 00:10:22.939 bw ( KiB/s): min=13448, max=14640, per=19.60%, avg=14044.00, stdev=842.87, samples=2 00:10:22.939 iops : min= 3362, max= 3660, avg=3511.00, stdev=210.72, samples=2 00:10:22.939 lat (msec) : 10=0.88%, 20=64.10%, 50=32.35%, 100=2.67% 00:10:22.939 cpu : usr=2.58%, sys=5.47%, ctx=325, majf=0, minf=2 00:10:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.939 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.939 job2: (groupid=0, jobs=1): err= 0: pid=856382: Mon Dec 16 16:16:11 2024 00:10:22.939 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:10:22.939 slat (nsec): min=1135, max=11276k, avg=84274.84, stdev=680265.32 00:10:22.939 clat (usec): min=3473, max=23860, avg=11505.30, stdev=2987.31 00:10:22.939 lat (usec): min=3478, max=23866, avg=11589.57, stdev=3038.50 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 4228], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[ 9503], 00:10:22.939 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:10:22.939 | 70.00th=[11600], 80.00th=[13042], 90.00th=[15270], 95.00th=[17957], 00:10:22.939 | 99.00th=[20579], 99.50th=[21103], 99.90th=[23725], 99.95th=[23987], 00:10:22.939 | 99.99th=[23987] 00:10:22.939 write: IOPS=5952, BW=23.3MiB/s (24.4MB/s)(23.5MiB/1009msec); 0 zone resets 00:10:22.939 slat (nsec): min=1957, max=9380.8k, avg=68585.33, stdev=416397.75 00:10:22.939 clat (usec): min=2059, max=55564, avg=10519.24, stdev=3634.14 00:10:22.939 lat (usec): min=2099, max=55566, avg=10587.82, stdev=3659.25 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 3621], 5.00th=[ 5538], 10.00th=[ 7242], 20.00th=[ 8586], 00:10:22.939 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:10:22.939 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11994], 00:10:22.939 | 99.00th=[22152], 99.50th=[33424], 99.90th=[48497], 99.95th=[53216], 00:10:22.939 | 99.99th=[55313] 00:10:22.939 bw ( KiB/s): min=22456, max=24576, per=32.82%, avg=23516.00, stdev=1499.07, samples=2 00:10:22.939 iops : min= 5614, max= 6144, avg=5879.00, stdev=374.77, 
samples=2 00:10:22.939 lat (msec) : 4=1.36%, 10=27.92%, 20=69.16%, 50=1.51%, 100=0.05% 00:10:22.939 cpu : usr=3.47%, sys=5.85%, ctx=592, majf=0, minf=1 00:10:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.939 issued rwts: total=5632,6006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.939 job3: (groupid=0, jobs=1): err= 0: pid=856383: Mon Dec 16 16:16:11 2024 00:10:22.939 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.5MiB/1049msec) 00:10:22.939 slat (nsec): min=1434, max=15115k, avg=155479.38, stdev=976789.83 00:10:22.939 clat (usec): min=5586, max=75327, avg=18925.20, stdev=13166.53 00:10:22.939 lat (usec): min=5592, max=75339, avg=19080.68, stdev=13231.95 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 6259], 5.00th=[10421], 10.00th=[12256], 20.00th=[12518], 00:10:22.939 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13566], 60.00th=[15401], 00:10:22.939 | 70.00th=[15664], 80.00th=[19530], 90.00th=[36439], 95.00th=[55837], 00:10:22.939 | 99.00th=[71828], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:10:22.939 | 99.99th=[74974] 00:10:22.939 write: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1049msec); 0 zone resets 00:10:22.939 slat (usec): min=2, max=16729, avg=184.65, stdev=830.66 00:10:22.939 clat (usec): min=2923, max=75343, avg=26700.88, stdev=13486.78 00:10:22.939 lat (usec): min=2934, max=75355, avg=26885.53, stdev=13584.09 00:10:22.939 clat percentiles (usec): 00:10:22.939 | 1.00th=[ 4490], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[16909], 00:10:22.939 | 30.00th=[19006], 40.00th=[20841], 50.00th=[21103], 60.00th=[22414], 00:10:22.939 | 70.00th=[34341], 80.00th=[42730], 90.00th=[47973], 95.00th=[50594], 00:10:22.939 | 99.00th=[56361], 99.50th=[57934], 99.90th=[58983], 99.95th=[59507], 00:10:22.939 | 99.99th=[74974] 00:10:22.939 bw ( KiB/s): min=12272, max=12288, per=17.14%, avg=12280.00, stdev=11.31, samples=2 00:10:22.939 iops : min= 3068, max= 3072, avg=3070.00, stdev= 2.83, samples=2 00:10:22.940 lat (msec) : 4=0.35%, 10=3.11%, 20=53.00%, 50=37.63%, 100=5.90% 00:10:22.940 cpu : usr=2.48%, sys=4.10%, ctx=368, majf=0, minf=1 00:10:22.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:22.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.940 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.940 00:10:22.940 Run status group 0 (all jobs): 00:10:22.940 READ: bw=63.6MiB/s (66.7MB/s), 10.0MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=66.7MiB (69.9MB), run=1003-1049msec 00:10:22.940 WRITE: bw=70.0MiB/s (73.4MB/s), 11.4MiB/s-23.9MiB/s (12.0MB/s-25.0MB/s), io=73.4MiB (77.0MB), run=1003-1049msec 00:10:22.940 00:10:22.940 Disk stats (read/write): 00:10:22.940 nvme0n1: ios=4657/4674, merge=0/0, ticks=15820/18998, in_queue=34818, util=89.38% 00:10:22.940 nvme0n2: ios=2610/3031, merge=0/0, ticks=17141/30433, in_queue=47574, util=85.10% 00:10:22.940 nvme0n3: ios=4667/4767, merge=0/0, ticks=50331/48122, in_queue=98453, util=97.84% 00:10:22.940 nvme0n4: ios=2093/2399, merge=0/0, ticks=32487/66875, in_queue=99362, util=100.00% 00:10:22.940 16:16:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:22.940 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=856605 00:10:22.940 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:22.940 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:22.940 [global] 00:10:22.940 thread=1 00:10:22.940 invalidate=1 00:10:22.940 rw=read 00:10:22.940 time_based=1 00:10:22.940 runtime=10 00:10:22.940 ioengine=libaio 00:10:22.940 direct=1 00:10:22.940 bs=4096 00:10:22.940 iodepth=1 00:10:22.940 norandommap=1 00:10:22.940 numjobs=1 00:10:22.940 00:10:22.940 [job0] 00:10:22.940 filename=/dev/nvme0n1 00:10:22.940 [job1] 00:10:22.940 filename=/dev/nvme0n2 00:10:22.940 [job2] 00:10:22.940 filename=/dev/nvme0n3 00:10:22.940 [job3] 00:10:22.940 filename=/dev/nvme0n4 00:10:22.940 Could not set queue depth (nvme0n1) 00:10:22.940 Could not set queue depth (nvme0n2) 00:10:22.940 Could not set queue depth (nvme0n3) 00:10:22.940 Could not set queue depth (nvme0n4) 00:10:23.199 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.199 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.199 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.199 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.199 fio-3.35 00:10:23.199 Starting 4 threads 00:10:26.490 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:26.490 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:26.490 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=286720, buflen=4096 00:10:26.490 fio: pid=856756, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:26.490 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:10:26.490 fio: pid=856755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:26.490 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.490 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:26.490 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1331200, buflen=4096 00:10:26.490 fio: pid=856753, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:26.490 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.490 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:26.748 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.748 
16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:26.748 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=45674496, buflen=4096 00:10:26.748 fio: pid=856754, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:26.748 00:10:26.748 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=856753: Mon Dec 16 16:16:15 2024 00:10:26.748 read: IOPS=103, BW=413KiB/s (423kB/s)(1300KiB/3147msec) 00:10:26.748 slat (usec): min=6, max=29720, avg=102.38, stdev=1645.47 00:10:26.748 clat (usec): min=165, max=42937, avg=9547.66, stdev=17169.04 00:10:26.748 lat (usec): min=172, max=70902, avg=9650.34, stdev=17420.85 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 192], 00:10:26.748 | 30.00th=[ 202], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:10:26.748 | 70.00th=[ 375], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:26.748 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:26.748 | 99.99th=[42730] 00:10:26.748 bw ( KiB/s): min= 96, max= 757, per=1.55%, avg=212.83, stdev=266.73, samples=6 00:10:26.748 iops : min= 24, max= 189, avg=53.17, stdev=66.58, samples=6 00:10:26.748 lat (usec) : 250=63.80%, 500=13.19% 00:10:26.748 lat (msec) : 50=22.70% 00:10:26.748 cpu : usr=0.00%, sys=0.22%, ctx=330, majf=0, minf=1 00:10:26.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 issued rwts: total=326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.748 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=856754: Mon Dec 16 16:16:15 2024 00:10:26.748 read: IOPS=3289, BW=12.8MiB/s (13.5MB/s)(43.6MiB/3390msec) 00:10:26.748 slat (usec): min=6, max=10746, avg=11.24, stdev=171.15 00:10:26.748 clat (usec): min=182, max=44501, avg=290.91, stdev=1247.73 00:10:26.748 lat (usec): min=192, max=44526, avg=301.52, stdev=1271.18 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:10:26.748 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:10:26.748 | 70.00th=[ 258], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 269], 00:10:26.748 | 99.00th=[ 383], 99.50th=[ 420], 99.90th=[ 9241], 99.95th=[41157], 00:10:26.748 | 99.99th=[42206] 00:10:26.748 bw ( KiB/s): min=10245, max=15512, per=100.00%, avg=14508.83, stdev=2108.72, samples=6 00:10:26.748 iops : min= 2561, max= 3878, avg=3627.17, stdev=527.28, samples=6 00:10:26.748 lat (usec) : 250=52.47%, 500=47.35%, 750=0.04% 00:10:26.748 lat (msec) : 10=0.03%, 20=0.01%, 50=0.09% 00:10:26.748 cpu : usr=1.53%, sys=5.61%, ctx=11157, majf=0, minf=2 00:10:26.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 issued rwts: total=11152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.748 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=856755: Mon Dec 16 16:16:15 2024 00:10:26.748 read: IOPS=24, BW=98.0KiB/s (100kB/s)(288KiB/2939msec) 00:10:26.748 slat (nsec): min=10366, max=32579, avg=14950.21, stdev=4137.91 00:10:26.748 clat (usec): min=427, max=44019, avg=40502.32, stdev=4806.91 00:10:26.748 lat (usec): min=459, max=44030, avg=40517.15, stdev=4804.76 00:10:26.748 clat percentiles (usec): 00:10:26.748 | 1.00th=[ 429], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:26.748 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:26.748 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:26.748 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:26.748 | 99.99th=[43779] 00:10:26.748 bw ( KiB/s): min= 96, max= 104, per=0.71%, avg=97.60, stdev= 3.58, samples=5 00:10:26.748 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:26.748 lat (usec) : 500=1.37% 00:10:26.748 lat (msec) : 50=97.26% 00:10:26.748 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=2 00:10:26.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.748 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.749 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=856756: Mon Dec 16 16:16:15 2024 00:10:26.749 read: IOPS=25, BW=102KiB/s (104kB/s)(280KiB/2745msec) 00:10:26.749 slat (nsec): min=7952, max=34655, avg=14524.65, stdev=6665.36 00:10:26.749 clat (usec): min=469, max=42012, avg=38890.09, stdev=9525.06 00:10:26.749 lat (usec): min=490, max=42030, avg=38904.67, stdev=9522.12 00:10:26.749 clat percentiles (usec): 00:10:26.749 | 1.00th=[ 469], 5.00th=[ 545], 10.00th=[41157], 20.00th=[41157], 00:10:26.749 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:26.749 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:26.749 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:26.749 | 99.99th=[42206] 00:10:26.749 bw ( KiB/s): min= 96, max= 112, per=0.73%, avg=100.80, stdev= 7.16, samples=5 00:10:26.749 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:10:26.749 lat (usec) : 500=2.82%, 750=2.82% 00:10:26.749 lat (msec) : 50=92.96% 00:10:26.749 cpu : usr=0.07%, sys=0.00%, ctx=71, majf=0, minf=2 00:10:26.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.749 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.749 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.749 00:10:26.749 Run status group 0 (all jobs): 00:10:26.749 READ: bw=13.4MiB/s (14.0MB/s), 98.0KiB/s-12.8MiB/s (100kB/s-13.5MB/s), io=45.4MiB (47.6MB), run=2745-3390msec 00:10:26.749 00:10:26.749 Disk stats (read/write): 00:10:26.749 nvme0n1: ios=214/0, merge=0/0, ticks=3984/0, in_queue=3984, util=98.31% 00:10:26.749 nvme0n2: ios=11150/0, merge=0/0, ticks=3079/0, in_queue=3079, util=95.63% 00:10:26.749 nvme0n3: ios=70/0, merge=0/0, ticks=2835/0, in_queue=2835, util=96.55% 00:10:26.749 nvme0n4: 
ios=67/0, merge=0/0, ticks=2601/0, in_queue=2601, util=96.45% 00:10:27.007 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.007 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:27.266 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.266 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:27.525 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.525 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:27.525 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:27.525 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:27.785 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:27.785 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 856605 00:10:27.785 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:27.785 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:28.044 nvmf hotplug test: fio failed as expected 00:10:28.044 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.304 rmmod nvme_tcp 00:10:28.304 rmmod nvme_fabrics 00:10:28.304 rmmod nvme_keyring 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 853801 ']' 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 853801 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 853801 ']' 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 853801 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 853801 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 853801' 00:10:28.304 killing process with pid 853801 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 853801 00:10:28.304 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 853801 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.564 16:16:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.564 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.473 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.473 00:10:30.473 real 0m27.023s 00:10:30.473 user 1m47.685s 00:10:30.473 sys 0m8.277s 00:10:30.473 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.473 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.473 ************************************ 00:10:30.473 END TEST nvmf_fio_target 00:10:30.473 ************************************ 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.733 ************************************ 00:10:30.733 START TEST nvmf_bdevio 00:10:30.733 ************************************ 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.733 * Looking for test storage... 
00:10:30.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.733 --rc genhtml_branch_coverage=1 00:10:30.733 --rc genhtml_function_coverage=1 00:10:30.733 --rc genhtml_legend=1 00:10:30.733 --rc geninfo_all_blocks=1 00:10:30.733 --rc geninfo_unexecuted_blocks=1 00:10:30.733 00:10:30.733 ' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.733 --rc genhtml_branch_coverage=1 00:10:30.733 --rc genhtml_function_coverage=1 00:10:30.733 --rc genhtml_legend=1 00:10:30.733 --rc geninfo_all_blocks=1 00:10:30.733 --rc geninfo_unexecuted_blocks=1 00:10:30.733 00:10:30.733 ' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.733 --rc genhtml_branch_coverage=1 00:10:30.733 --rc genhtml_function_coverage=1 00:10:30.733 --rc genhtml_legend=1 00:10:30.733 --rc geninfo_all_blocks=1 00:10:30.733 --rc geninfo_unexecuted_blocks=1 00:10:30.733 00:10:30.733 ' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.733 --rc genhtml_branch_coverage=1 00:10:30.733 --rc genhtml_function_coverage=1 00:10:30.733 --rc genhtml_legend=1 00:10:30.733 --rc geninfo_all_blocks=1 00:10:30.733 --rc geninfo_unexecuted_blocks=1 00:10:30.733 00:10:30.733 ' 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.733 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.734 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.310 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:37.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:37.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.311 16:16:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:37.311 Found net devices under 0000:af:00.0: cvl_0_0 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:37.311 Found net devices under 0000:af:00.1: cvl_0_1 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.311 
16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.311 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:10:37.311 00:10:37.311 --- 10.0.0.2 ping statistics --- 00:10:37.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.311 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:37.311 00:10:37.311 --- 10.0.0.1 ping statistics --- 00:10:37.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.311 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=861132 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 861132 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 861132 ']' 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.311 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.311 [2024-12-16 16:16:25.304669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:37.311 [2024-12-16 16:16:25.304719] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.311 [2024-12-16 16:16:25.385516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.311 [2024-12-16 16:16:25.408041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.311 [2024-12-16 16:16:25.408079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.311 [2024-12-16 16:16:25.408085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.311 [2024-12-16 16:16:25.408091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.311 [2024-12-16 16:16:25.408100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.311 [2024-12-16 16:16:25.409630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:37.311 [2024-12-16 16:16:25.409741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:37.311 [2024-12-16 16:16:25.409851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.312 [2024-12-16 16:16:25.409852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.312 [2024-12-16 16:16:25.549281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.312 Malloc0 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.312 16:16:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.312 [2024-12-16 16:16:25.618988] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:37.312 { 00:10:37.312 "params": { 00:10:37.312 "name": "Nvme$subsystem", 00:10:37.312 "trtype": "$TEST_TRANSPORT", 00:10:37.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.312 "adrfam": "ipv4", 00:10:37.312 "trsvcid": "$NVMF_PORT", 00:10:37.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.312 "hdgst": ${hdgst:-false}, 00:10:37.312 "ddgst": ${ddgst:-false} 00:10:37.312 }, 00:10:37.312 "method": "bdev_nvme_attach_controller" 00:10:37.312 } 00:10:37.312 EOF 00:10:37.312 )") 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:37.312 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:37.312 "params": { 00:10:37.312 "name": "Nvme1", 00:10:37.312 "trtype": "tcp", 00:10:37.312 "traddr": "10.0.0.2", 00:10:37.312 "adrfam": "ipv4", 00:10:37.312 "trsvcid": "4420", 00:10:37.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.312 "hdgst": false, 00:10:37.312 "ddgst": false 00:10:37.312 }, 00:10:37.312 "method": "bdev_nvme_attach_controller" 00:10:37.312 }' 00:10:37.312 [2024-12-16 16:16:25.667484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:37.312 [2024-12-16 16:16:25.667524] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861157 ] 00:10:37.312 [2024-12-16 16:16:25.741982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.312 [2024-12-16 16:16:25.767249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.312 [2024-12-16 16:16:25.767298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.312 [2024-12-16 16:16:25.767298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.570 I/O targets: 00:10:37.570 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:37.570 00:10:37.570 00:10:37.570 CUnit - A unit testing framework for C - Version 2.1-3 00:10:37.570 http://cunit.sourceforge.net/ 00:10:37.570 00:10:37.570 00:10:37.570 Suite: bdevio tests on: Nvme1n1 00:10:37.570 Test: blockdev write read block ...passed 00:10:37.570 Test: blockdev write zeroes read block ...passed 00:10:37.570 Test: blockdev write zeroes read no split ...passed 00:10:37.570 Test: blockdev write zeroes read split ...passed 00:10:37.570 Test: blockdev write zeroes read split partial ...passed 00:10:37.570 Test: blockdev reset ...[2024-12-16 16:16:26.151137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:37.570 [2024-12-16 16:16:26.151207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822340 (9): Bad file descriptor 00:10:37.829 [2024-12-16 16:16:26.206907] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:37.829 passed 00:10:37.829 Test: blockdev write read 8 blocks ...passed 00:10:37.829 Test: blockdev write read size > 128k ...passed 00:10:37.829 Test: blockdev write read invalid size ...passed 00:10:37.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:37.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:37.829 Test: blockdev write read max offset ...passed 00:10:37.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:37.829 Test: blockdev writev readv 8 blocks ...passed 00:10:37.829 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.088 Test: blockdev writev readv block ...passed 00:10:38.088 Test: blockdev writev readv size > 128k ...passed 00:10:38.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.088 Test: blockdev comparev and writev ...[2024-12-16 16:16:26.501134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.501957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:38.088 [2024-12-16 16:16:26.501964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:38.088 passed 00:10:38.088 Test: blockdev nvme passthru rw ...passed 00:10:38.088 Test: blockdev nvme passthru vendor specific ...[2024-12-16 16:16:26.585438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.088 [2024-12-16 16:16:26.585453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.585562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.088 [2024-12-16 16:16:26.585571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.585688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.088 [2024-12-16 16:16:26.585698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:38.088 [2024-12-16 16:16:26.585818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:38.088 [2024-12-16 16:16:26.585828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:38.088 passed 00:10:38.088 Test: blockdev nvme admin passthru ...passed 00:10:38.088 Test: blockdev copy ...passed 00:10:38.088 00:10:38.088 Run Summary: Type Total Ran Passed Failed Inactive 00:10:38.088 suites 1 1 n/a 0 0 00:10:38.088 tests 23 23 23 0 0 00:10:38.088 asserts 152 152 152 0 n/a 00:10:38.088 00:10:38.088 Elapsed time = 1.221 seconds 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.347 rmmod nvme_tcp 00:10:38.347 rmmod nvme_fabrics 00:10:38.347 rmmod nvme_keyring 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 861132 ']' 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 861132 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 861132 ']' 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 861132 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 861132 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 861132' 00:10:38.347 killing process with pid 861132 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 861132 00:10:38.347 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 861132 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.607 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.602 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.602 00:10:40.602 real 0m10.046s 00:10:40.602 user 0m10.786s 00:10:40.602 sys 0m4.921s 00:10:40.602 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.602 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 ************************************ 00:10:40.602 END TEST nvmf_bdevio 00:10:40.602 ************************************ 00:10:40.602 16:16:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:40.602 00:10:40.602 real 4m33.829s 00:10:40.602 user 10m24.197s 00:10:40.602 sys 1m38.252s 00:10:40.602 
16:16:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.602 16:16:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 ************************************ 00:10:40.602 END TEST nvmf_target_core 00:10:40.602 ************************************ 00:10:40.887 16:16:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:40.887 16:16:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.887 16:16:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.887 16:16:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.887 ************************************ 00:10:40.887 START TEST nvmf_target_extra 00:10:40.887 ************************************ 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:40.887 * Looking for test storage... 00:10:40.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.887 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.888 --rc genhtml_branch_coverage=1 00:10:40.888 --rc genhtml_function_coverage=1 00:10:40.888 --rc genhtml_legend=1 00:10:40.888 --rc geninfo_all_blocks=1 00:10:40.888 --rc geninfo_unexecuted_blocks=1 00:10:40.888 00:10:40.888 ' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.888 --rc genhtml_branch_coverage=1 00:10:40.888 --rc genhtml_function_coverage=1 00:10:40.888 --rc genhtml_legend=1 00:10:40.888 --rc geninfo_all_blocks=1 00:10:40.888 --rc geninfo_unexecuted_blocks=1 00:10:40.888 00:10:40.888 ' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.888 --rc genhtml_branch_coverage=1 00:10:40.888 --rc genhtml_function_coverage=1 00:10:40.888 --rc genhtml_legend=1 00:10:40.888 --rc geninfo_all_blocks=1 00:10:40.888 --rc geninfo_unexecuted_blocks=1 00:10:40.888 00:10:40.888 ' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.888 --rc genhtml_branch_coverage=1 00:10:40.888 --rc genhtml_function_coverage=1 00:10:40.888 --rc genhtml_legend=1 00:10:40.888 --rc geninfo_all_blocks=1 00:10:40.888 --rc geninfo_unexecuted_blocks=1 00:10:40.888 00:10:40.888 ' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
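The lt 1.15 2 walk above is scripts/common.sh comparing dotted version strings field by field (here deciding that the installed lcov predates 2.x before choosing coverage flags). A hedged re-implementation of the same idea — the function name ver_lt is mine, not the script's:

# sketch: return 0 when version $1 sorts before $2, comparing numeric fields
ver_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # versions are equal
}
ver_lt 1.15 2 && echo "lcov older than 2"   # true here, matching the trace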
00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.888 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.148 ************************************ 00:10:41.148 START TEST nvmf_example 00:10:41.148 ************************************ 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:41.148 * Looking for test storage... 
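The "[: : integer expression expected" complaint above is a real bug in the sourced common.sh: its line 33 runs '[' '' -eq 1 ']' because the variable it tests is unset when this suite runs. One defensive variant, assuming the intent is a flag that defaults to 0 (SOME_FLAG is a hypothetical stand-in; the log does not show which variable line 33 actually reads):

# sketch: guard integer tests against empty/unset values
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi
# equivalently, bash arithmetic with the same default:
if (( ${SOME_FLAG:-0} == 1 )); then
    echo "flag set"
fi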
00:10:41.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.148 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.149 --rc genhtml_branch_coverage=1 00:10:41.149 --rc genhtml_function_coverage=1 00:10:41.149 --rc genhtml_legend=1 00:10:41.149 --rc geninfo_all_blocks=1 00:10:41.149 --rc geninfo_unexecuted_blocks=1 00:10:41.149 00:10:41.149 ' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.149 --rc genhtml_branch_coverage=1 00:10:41.149 --rc genhtml_function_coverage=1 00:10:41.149 --rc genhtml_legend=1 00:10:41.149 --rc geninfo_all_blocks=1 00:10:41.149 --rc geninfo_unexecuted_blocks=1 00:10:41.149 00:10:41.149 ' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.149 --rc genhtml_branch_coverage=1 00:10:41.149 --rc genhtml_function_coverage=1 00:10:41.149 --rc genhtml_legend=1 00:10:41.149 --rc geninfo_all_blocks=1 00:10:41.149 --rc geninfo_unexecuted_blocks=1 00:10:41.149 00:10:41.149 ' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.149 --rc genhtml_branch_coverage=1 00:10:41.149 --rc genhtml_function_coverage=1 00:10:41.149 --rc genhtml_legend=1 00:10:41.149 --rc geninfo_all_blocks=1 00:10:41.149 --rc geninfo_unexecuted_blocks=1 00:10:41.149 00:10:41.149 ' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:41.149 16:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:41.149 16:16:29 
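paths/export.sh is sourced once per nested test and prepends the same three toolchain directories each time, which is why the PATH echoed above carries six copies of every Go/protoc/golangci entry by the time nvmf_example starts. Harmless, but noisy; a hedged, order-preserving dedup sketch (my suggestion, not something the harness does):

# sketch: rebuild PATH keeping only the first occurrence of each entry
dedup_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $PATH; do
        [ -n "$entry" ] || continue
        [[ $seen == *":$entry:"* ]] && continue
        seen+="$entry:"
        out+="${out:+:}$entry"
    done
    PATH=$out
}
dedup_path && export PATH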
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.149 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.150 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:47.716 16:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.716 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:47.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:47.717 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:47.717 Found net devices under 0000:af:00.0: cvl_0_0 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:47.717 Found net devices under 0000:af:00.1: cvl_0_1 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.717 16:16:35 
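The discovery loop above is nvmf/common.sh mapping each whitelisted PCI function (here two Intel E810 0x159b ports) to its kernel net device through sysfs. A condensed sketch of the same lookup — the 0000:af:00.x / cvl_0_x pairing is simply what this machine exposed:

# sketch: list net devices backed by a given PCI function
pci=0000:af:00.0                        # value taken from this run's log
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] || continue           # no net children -> unbound or not a NIC
    name=${dev##*/}                     # e.g. cvl_0_0 here
    state=$(cat "/sys/class/net/$name/operstate" 2>/dev/null)
    echo "Found net device under $pci: $name ($state)"
done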
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
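Before any NVMe traffic, nvmf_tcp_init splits the NIC's two ports into roles: the first port moves into a fresh namespace as the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and a tagged iptables rule opens port 4420; the two pings whose output follows confirm both directions. A hedged replay of that plumbing, with device and address values copied from the trace (run as root):

# sketch: target NIC in its own namespace, initiator NIC in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the comment tag lets teardown strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# sanity checks, mirroring the two pings in the log
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1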
00:10:47.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:10:47.717 00:10:47.717 --- 10.0.0.2 ping statistics --- 00:10:47.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.717 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:47.717 00:10:47.717 --- 10.0.0.1 ping statistics --- 00:10:47.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.717 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=865130 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 865130 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 865130 ']' 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.717 16:16:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.717 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.717 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.717 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.717 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.718 16:16:36 
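With the example app listening on /var/tmp/spdk.sock, the rpc_cmd calls above build the whole target: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420. The same sequence, sketched as direct scripts/rpc.py invocations (paths abbreviated; rpc_cmd in the harness is a thin wrapper around this, and the transport flags are copied verbatim from the trace):

# sketch: the traced RPC sequence as explicit rpc.py calls
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192       # flags as traced; see rpc.py --help
$RPC bdev_malloc_create 64 512                     # 64 MiB, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the workload that produced the latency table that follows:
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'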
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:47.718 16:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:57.695 Initializing NVMe Controllers 00:10:57.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:57.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:57.695 Initialization complete. Launching workers. 00:10:57.695 ======================================================== 00:10:57.695 Latency(us) 00:10:57.695 Device Information : IOPS MiB/s Average min max 00:10:57.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18568.50 72.53 3448.52 679.84 15662.87 00:10:57.695 ======================================================== 00:10:57.695 Total : 18568.50 72.53 3448.52 679.84 15662.87 00:10:57.695 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.695 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.695 rmmod nvme_tcp 00:10:57.954 rmmod nvme_fabrics 00:10:57.954 rmmod nvme_keyring 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 865130 ']' 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 865130 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 865130 ']' 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 865130 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865130 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865130' 00:10:57.954 killing process with pid 865130 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 865130 00:10:57.954 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 865130 00:10:57.954 nvmf threads initialize successfully 00:10:57.954 bdev subsystem init successfully 00:10:57.954 created a nvmf target service 00:10:57.954 create targets's poll groups done 00:10:57.954 all subsystems of target started 00:10:57.954 nvmf target is running 00:10:57.954 all subsystems of target stopped 00:10:57.954 destroy targets's poll groups done 00:10:57.954 destroyed the nvmf target service 00:10:57.954 bdev subsystem finish successfully 00:10:57.954 nvmf threads destroy successfully 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.213 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.214 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.214 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.119 00:11:00.119 real 0m19.161s 00:11:00.119 user 0m43.229s 00:11:00.119 sys 0m5.981s 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.119 ************************************ 00:11:00.119 END TEST nvmf_example 00:11:00.119 ************************************ 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:00.119 16:16:48 
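The nvmftestfini pass traced above tears everything down in reverse: unload the kernel initiator modules, kill the target app, strip only the SPDK-tagged iptables rules, and dissolve the namespace. A hedged sketch of that cleanup — the grep -v SPDK_NVMF trick is exactly why the setup rule carried a comment tag:

# sketch: mirror of the traced nvmftestfini cleanup
modprobe -v -r nvme-tcp nvme-fabrics      # nvme_keyring drops out as a dependency,
                                          # as the rmmod lines above show
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # nvmfpid: the app started earlier
# drop only rules tagged SPDK_NVMF, keep everything else intact
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk           # returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1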
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.119 16:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.379 ************************************ 00:11:00.379 START TEST nvmf_filesystem 00:11:00.379 ************************************ 00:11:00.379 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:00.379 * Looking for test storage... 00:11:00.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.379 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.379 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.379 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.380 --rc genhtml_branch_coverage=1 00:11:00.380 --rc genhtml_function_coverage=1 00:11:00.380 --rc genhtml_legend=1 00:11:00.380 --rc geninfo_all_blocks=1 00:11:00.380 --rc geninfo_unexecuted_blocks=1 00:11:00.380 00:11:00.380 ' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.380 --rc genhtml_branch_coverage=1 00:11:00.380 --rc genhtml_function_coverage=1 00:11:00.380 --rc genhtml_legend=1 00:11:00.380 --rc geninfo_all_blocks=1 00:11:00.380 --rc geninfo_unexecuted_blocks=1 00:11:00.380 00:11:00.380 ' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.380 --rc genhtml_branch_coverage=1 00:11:00.380 --rc genhtml_function_coverage=1 00:11:00.380 --rc genhtml_legend=1 00:11:00.380 --rc geninfo_all_blocks=1 00:11:00.380 --rc geninfo_unexecuted_blocks=1 00:11:00.380 00:11:00.380 ' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.380 --rc genhtml_branch_coverage=1 00:11:00.380 --rc genhtml_function_coverage=1 00:11:00.380 --rc genhtml_legend=1 00:11:00.380 --rc geninfo_all_blocks=1 00:11:00.380 --rc geninfo_unexecuted_blocks=1 00:11:00.380 00:11:00.380 ' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:00.380 16:16:48 
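The lt/cmp_versions trace just above is scripts/common.sh checking whether the installed lcov predates 2: both version strings are split on dots, dashes, and colons, each field is validated as a decimal, and the first differing field decides. A condensed, self-contained sketch of that logic (simplified, not the verbatim function):

    # Return success when $1 < $2, comparing numeric fields left to right.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0   # treat non-numeric fields as 0
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a > b )) && return 1       # first difference decides
            (( a < b )) && return 0
        done
        return 1                          # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"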
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:00.380 
16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:00.380 16:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:00.380 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
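applications.sh, traced above, anchors every path on its own location: dirname plus readlink -f yields .../spdk/test/common, stripping the test/common suffix gives the repo root, and launchers are stored as arrays so call sites can append flags per invocation. A sketch of the same layout logic, assuming the directory structure shown in the trace:

    # Derive repo-relative paths from this script's own location.
    _this=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/common
    _root=${_this%/test/common}                             # .../spdk
    _app_dir=$_root/build/bin
    _examples_dir=$_root/build/examples

    # Launchers are arrays so call sites can do "${NVMF_APP[@]}" -m 0x3 ...
    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")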
00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:00.381 #define SPDK_CONFIG_H 00:11:00.381 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:00.381 #define SPDK_CONFIG_APPS 1 00:11:00.381 #define SPDK_CONFIG_ARCH native 00:11:00.381 #undef SPDK_CONFIG_ASAN 00:11:00.381 #undef SPDK_CONFIG_AVAHI 00:11:00.381 #undef SPDK_CONFIG_CET 00:11:00.381 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:00.381 #define SPDK_CONFIG_COVERAGE 1 00:11:00.381 #define SPDK_CONFIG_CROSS_PREFIX 00:11:00.381 #undef SPDK_CONFIG_CRYPTO 00:11:00.381 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:00.381 #undef SPDK_CONFIG_CUSTOMOCF 00:11:00.381 #undef SPDK_CONFIG_DAOS 00:11:00.381 #define SPDK_CONFIG_DAOS_DIR 00:11:00.381 #define SPDK_CONFIG_DEBUG 1 00:11:00.381 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:00.381 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:00.381 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:00.381 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:00.381 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:00.381 #undef SPDK_CONFIG_DPDK_UADK 00:11:00.381 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:00.381 #define SPDK_CONFIG_EXAMPLES 1 00:11:00.381 #undef SPDK_CONFIG_FC 00:11:00.381 #define SPDK_CONFIG_FC_PATH 00:11:00.381 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:00.381 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:00.381 #define SPDK_CONFIG_FSDEV 1 00:11:00.381 #undef SPDK_CONFIG_FUSE 00:11:00.381 #undef SPDK_CONFIG_FUZZER 00:11:00.381 #define SPDK_CONFIG_FUZZER_LIB 00:11:00.381 #undef SPDK_CONFIG_GOLANG 00:11:00.381 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:00.381 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:00.381 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:00.381 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:00.381 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:00.381 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:00.381 #undef SPDK_CONFIG_HAVE_LZ4 00:11:00.381 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:00.381 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:00.381 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:00.381 #define SPDK_CONFIG_IDXD 1 00:11:00.381 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:00.381 #undef SPDK_CONFIG_IPSEC_MB 00:11:00.381 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:00.381 #define SPDK_CONFIG_ISAL 1 00:11:00.381 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:00.381 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:00.381 #define SPDK_CONFIG_LIBDIR 00:11:00.381 #undef SPDK_CONFIG_LTO 00:11:00.381 #define SPDK_CONFIG_MAX_LCORES 128 00:11:00.381 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:00.381 #define SPDK_CONFIG_NVME_CUSE 1 00:11:00.381 #undef SPDK_CONFIG_OCF 00:11:00.381 #define SPDK_CONFIG_OCF_PATH 00:11:00.381 #define SPDK_CONFIG_OPENSSL_PATH 00:11:00.381 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:00.381 #define SPDK_CONFIG_PGO_DIR 00:11:00.381 #undef SPDK_CONFIG_PGO_USE 00:11:00.381 #define SPDK_CONFIG_PREFIX /usr/local 00:11:00.381 #undef SPDK_CONFIG_RAID5F 00:11:00.381 #undef SPDK_CONFIG_RBD 00:11:00.381 #define SPDK_CONFIG_RDMA 1 00:11:00.381 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:00.381 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:00.381 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:00.381 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:00.381 #define SPDK_CONFIG_SHARED 1 00:11:00.381 #undef SPDK_CONFIG_SMA 00:11:00.381 #define SPDK_CONFIG_TESTS 1 00:11:00.381 #undef SPDK_CONFIG_TSAN 00:11:00.381 #define SPDK_CONFIG_UBLK 1 00:11:00.381 #define SPDK_CONFIG_UBSAN 1 00:11:00.381 #undef SPDK_CONFIG_UNIT_TESTS 00:11:00.381 #undef SPDK_CONFIG_URING 00:11:00.381 #define SPDK_CONFIG_URING_PATH 00:11:00.381 #undef SPDK_CONFIG_URING_ZNS 00:11:00.381 #undef SPDK_CONFIG_USDT 00:11:00.381 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:00.381 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:00.381 #define SPDK_CONFIG_VFIO_USER 1 00:11:00.381 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:00.381 #define SPDK_CONFIG_VHOST 1 00:11:00.381 #define SPDK_CONFIG_VIRTIO 1 00:11:00.381 #undef SPDK_CONFIG_VTUNE 00:11:00.381 #define SPDK_CONFIG_VTUNE_DIR 00:11:00.381 #define SPDK_CONFIG_WERROR 1 00:11:00.381 #define SPDK_CONFIG_WPDK_DIR 00:11:00.381 #undef SPDK_CONFIG_XNVME 00:11:00.381 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.381 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:00.382 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:00.644 16:16:48 
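The SPDK_CONFIG_H wall a little earlier is a single bash conditional, not prose: applications.sh slurps include/spdk/config.h with $(<file) and glob-matches its contents against #define SPDK_CONFIG_DEBUG, which xtrace prints with every pattern character backslash-escaped. The same check written plainly (config_h is an illustrative variable name):

    # Plain rendering of the escaped glob match on include/spdk/config.h:
    config_h=$_root/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build confirmed; debug-only test apps may be enabled
    fi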
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
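pm/common, just above, decides which resource monitors to run: an associative array records which collectors need sudo, and bare-metal Linux hosts (the trace also consults the DMI product name to rule out QEMU and checks for /.dockerenv) additionally get CPU-temperature and BMC power monitoring. A sketch of that selection under those assumptions:

    # Host-dependent monitor selection, mirroring pm/common above.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1     # BMC power polling needs sudo
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    # Bare-metal Linux (not QEMU, not a container) also gets temp + BMC power:
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi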
00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:00.644 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:00.645 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:00.645 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:00.645 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:00.645 16:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:00.645 16:16:49 
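The long run of ': 0' / ': 1' lines paired with export statements here (and continuing briefly below) is consistent with bash's default-and-export idiom: ${VAR:=default} assigns only when the variable is unset or empty, and the no-op ':' discards the expanded value, which is why xtrace shows the post-expansion ': 0'. A two-line sketch with a hypothetical flag name:

    : "${SPDK_TEST_EXAMPLE:=0}"   # hypothetical flag; assigns 0 only if unset/empty
    export SPDK_TEST_EXAMPLE      # xtrace renders the pair as ': 0' then 'export ...'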
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:00.645 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:00.646 16:16:49 
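The _LCOV* assignments above pick coverage tooling by compiler: clang builds route lcov through an llvm-cov wrapper passed via --gcov-tool, while gcc builds (and this run, where $CC is empty) fall through to plain gcov with no extra options. A sketch of that selection (wrapper path illustrative):

    # Coverage-tool selection, as in the _LCOV* trace above.
    _LCOV_MAIN=0 _LCOV_LLVM=1 _LCOV=
    _lcov_opt[_LCOV_LLVM]='--gcov-tool /path/to/llvm-gcov.sh'   # path illustrative
    _lcov_opt[_LCOV_MAIN]=
    [[ ${CC:-} == *clang* ]] && _LCOV=$_LCOV_LLVM
    lcov_opt=${_lcov_opt[${_LCOV:-$_LCOV_MAIN}]}   # empty for gcc builds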
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 867270 ]] 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 867270 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.QAG5P6 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:00.646 16:16:49 
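set_test_storage, entered above, builds its scratch-space candidate list: the test's own directory first, then a per-run /tmp fallback whose name mktemp -u prints without creating anything. A sketch of those two steps ($testdir as in the trace):

    # Scratch-space candidates, preferring the test's own directory.
    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # -u: name only, nothing created
    storage_candidates=(
        "$testdir"
        "$storage_fallback/tests/${testdir##*/}"
        "$storage_fallback"
    )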
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QAG5P6/tests/target /tmp/spdk.QAG5P6 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:00.646 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88101519360 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552389120 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7450869760 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=47766163456 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776194560 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087462400 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110477824 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23015424 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775985664 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776194560 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=208896 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:00.647 * Looking for test storage... 
00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88101519360 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9665462272 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.647 --rc genhtml_branch_coverage=1 00:11:00.647 --rc genhtml_function_coverage=1 00:11:00.647 --rc genhtml_legend=1 00:11:00.647 --rc geninfo_all_blocks=1 00:11:00.647 --rc geninfo_unexecuted_blocks=1 00:11:00.647 00:11:00.647 ' 00:11:00.647 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.647 --rc genhtml_branch_coverage=1 00:11:00.647 --rc genhtml_function_coverage=1 00:11:00.648 --rc genhtml_legend=1 00:11:00.648 --rc geninfo_all_blocks=1 00:11:00.648 --rc geninfo_unexecuted_blocks=1 00:11:00.648 00:11:00.648 ' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.648 --rc genhtml_branch_coverage=1 00:11:00.648 --rc genhtml_function_coverage=1 00:11:00.648 --rc genhtml_legend=1 00:11:00.648 --rc geninfo_all_blocks=1 00:11:00.648 --rc geninfo_unexecuted_blocks=1 00:11:00.648 00:11:00.648 ' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.648 --rc genhtml_branch_coverage=1 00:11:00.648 --rc genhtml_function_coverage=1 00:11:00.648 --rc genhtml_legend=1 00:11:00.648 --rc geninfo_all_blocks=1 00:11:00.648 --rc geninfo_unexecuted_blocks=1 00:11:00.648 00:11:00.648 ' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.648 16:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.648 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.218 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.218 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.218 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.218 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.218 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.218 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:07.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:07.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.219 16:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:07.219 Found net devices under 0000:af:00.0: cvl_0_0 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:07.219 Found net devices under 0000:af:00.1: cvl_0_1 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.219 16:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.219 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:11:07.219 00:11:07.219 --- 10.0.0.2 ping statistics --- 00:11:07.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.219 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:07.219 00:11:07.219 --- 10.0.0.1 ping statistics --- 00:11:07.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.219 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.219 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 ************************************ 00:11:07.220 START TEST nvmf_filesystem_no_in_capsule 00:11:07.220 ************************************ 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=870468 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 870468 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 870468 ']' 00:11:07.220 16:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 [2024-12-16 16:16:55.325428] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:07.220 [2024-12-16 16:16:55.325472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.220 [2024-12-16 16:16:55.405872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.220 [2024-12-16 16:16:55.429262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.220 [2024-12-16 16:16:55.429299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.220 [2024-12-16 16:16:55.429308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.220 [2024-12-16 16:16:55.429316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.220 [2024-12-16 16:16:55.429322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
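For context on the waitforlisten step above: nvmfappstart backgrounds nvmf_tgt inside the target namespace and then polls until the pid is still alive and the RPC socket at /var/tmp/spdk.sock exists. A simplified sketch of that readiness loop, assuming the default RPC socket path shown in the log and a plain socket-existence probe instead of SPDK's full RPC-level check:

    # Sketch: block until an SPDK app is up on its UNIX-domain RPC socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0           # socket exists; good enough here
            sleep 0.5
        done
        return 1                                     # timed out
    }

    # usage, mirroring the log's invocation:
    # ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten_sketch "$!"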
00:11:07.220 [2024-12-16 16:16:55.430666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.220 [2024-12-16 16:16:55.430776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.220 [2024-12-16 16:16:55.430863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.220 [2024-12-16 16:16:55.430864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 [2024-12-16 16:16:55.563561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.220 16:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 [2024-12-16 16:16:55.715192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:07.220 { 00:11:07.220 "name": "Malloc1", 00:11:07.220 "aliases": [ 00:11:07.220 "c9e6de1a-2550-4a46-83a7-6503144363c8" 00:11:07.220 ], 00:11:07.220 "product_name": "Malloc disk", 00:11:07.220 "block_size": 512, 00:11:07.220 "num_blocks": 1048576, 00:11:07.220 "uuid": "c9e6de1a-2550-4a46-83a7-6503144363c8", 00:11:07.220 "assigned_rate_limits": { 00:11:07.220 "rw_ios_per_sec": 0, 00:11:07.220 "rw_mbytes_per_sec": 0, 00:11:07.220 "r_mbytes_per_sec": 0, 00:11:07.220 "w_mbytes_per_sec": 0 00:11:07.220 }, 00:11:07.220 "claimed": true, 00:11:07.220 "claim_type": "exclusive_write", 00:11:07.220 "zoned": false, 00:11:07.220 "supported_io_types": { 00:11:07.220 "read": 
true, 00:11:07.220 "write": true, 00:11:07.220 "unmap": true, 00:11:07.220 "flush": true, 00:11:07.220 "reset": true, 00:11:07.220 "nvme_admin": false, 00:11:07.220 "nvme_io": false, 00:11:07.220 "nvme_io_md": false, 00:11:07.220 "write_zeroes": true, 00:11:07.220 "zcopy": true, 00:11:07.220 "get_zone_info": false, 00:11:07.220 "zone_management": false, 00:11:07.220 "zone_append": false, 00:11:07.220 "compare": false, 00:11:07.220 "compare_and_write": false, 00:11:07.220 "abort": true, 00:11:07.220 "seek_hole": false, 00:11:07.220 "seek_data": false, 00:11:07.220 "copy": true, 00:11:07.220 "nvme_iov_md": false 00:11:07.220 }, 00:11:07.220 "memory_domains": [ 00:11:07.220 { 00:11:07.220 "dma_device_id": "system", 00:11:07.220 "dma_device_type": 1 00:11:07.220 }, 00:11:07.220 { 00:11:07.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.220 "dma_device_type": 2 00:11:07.220 } 00:11:07.220 ], 00:11:07.220 "driver_specific": {} 00:11:07.220 } 00:11:07.220 ]' 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:07.220 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:07.221 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:07.479 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:07.479 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:07.479 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:07.479 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:07.479 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.415 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.415 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:08.415 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.415 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:08.415 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:10.947 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:10.948 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:10.948 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:11.515 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.451 ************************************ 00:11:12.451 START TEST filesystem_ext4 00:11:12.451 ************************************ 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
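Stripped of the xtrace noise, the target/host setup the log just walked through is five RPCs plus one connect. The commands below are lifted from the trace; only the rpc.py wrapper path is an assumption (the harness's rpc_cmd routes to it), while NVME_HOSTNQN and NVME_HOSTID are the values nvme gen-hostnqn produced earlier in the log:

    RPC="./scripts/rpc.py"                  # assumed path to SPDK's RPC client
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
    $RPC bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks -> 1048576 blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # host side: connect, then locate the namespace by its serial
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe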
00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:12.451 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:12.451 mke2fs 1.47.0 (5-Feb-2023) 00:11:12.451 Discarding device blocks: 0/522240 done 00:11:12.451 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:12.451 Filesystem UUID: c80545e2-c8a7-409e-b0cd-cbd7cf9720fb 00:11:12.451 Superblock backups stored on blocks: 00:11:12.451 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:12.451 00:11:12.451 Allocating group tables: 0/64 done 00:11:12.451 Writing inode tables: 0/64 done 00:11:13.019 Creating journal (8192 blocks): done 00:11:14.728 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:14.728 00:11:14.728 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:14.728 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.293 
16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 870468 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.293 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.293 00:11:21.293 real 0m8.388s 00:11:21.294 user 0m0.036s 00:11:21.294 sys 0m0.068s 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:21.294 ************************************ 00:11:21.294 END TEST filesystem_ext4 00:11:21.294 ************************************ 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.294 ************************************ 00:11:21.294 START TEST filesystem_btrfs 00:11:21.294 ************************************ 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:21.294 16:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:21.294 btrfs-progs v6.8.1 00:11:21.294 See https://btrfs.readthedocs.io for more information. 00:11:21.294 00:11:21.294 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:21.294 NOTE: several default settings have changed in version 5.15, please make sure 00:11:21.294 this does not affect your deployments: 00:11:21.294 - DUP for metadata (-m dup) 00:11:21.294 - enabled no-holes (-O no-holes) 00:11:21.294 - enabled free-space-tree (-R free-space-tree) 00:11:21.294 00:11:21.294 Label: (null) 00:11:21.294 UUID: 06eaa319-5824-471c-8ce2-b39612f044c9 00:11:21.294 Node size: 16384 00:11:21.294 Sector size: 4096 (CPU page size: 4096) 00:11:21.294 Filesystem size: 510.00MiB 00:11:21.294 Block group profiles: 00:11:21.294 Data: single 8.00MiB 00:11:21.294 Metadata: DUP 32.00MiB 00:11:21.294 System: DUP 8.00MiB 00:11:21.294 SSD detected: yes 00:11:21.294 Zoned device: no 00:11:21.294 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:21.294 Checksum: crc32c 00:11:21.294 Number of devices: 1 00:11:21.294 Devices: 00:11:21.294 ID SIZE PATH 00:11:21.294 1 510.00MiB /dev/nvme0n1p1 00:11:21.294 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:21.294 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 870468 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.553 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.553 
16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.553 00:11:21.553 real 0m0.662s 00:11:21.553 user 0m0.032s 00:11:21.553 sys 0m0.110s 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.554 ************************************ 00:11:21.554 END TEST filesystem_btrfs 00:11:21.554 ************************************ 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.554 ************************************ 00:11:21.554 START TEST filesystem_xfs 00:11:21.554 ************************************ 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:21.554 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:21.554 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:21.554 = sectsz=512 attr=2, projid32bit=1 00:11:21.554 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:21.554 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:21.554 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:21.554 = sunit=0 swidth=0 blks 00:11:21.554 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:21.554 log =internal log bsize=4096 blocks=16384, version=2 00:11:21.554 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:21.554 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:22.490 Discarding blocks...Done. 00:11:22.490 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:22.490 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 870468 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.393 00:11:24.393 real 0m2.883s 00:11:24.393 user 0m0.029s 00:11:24.393 sys 0m0.071s 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:24.393 ************************************ 00:11:24.393 END TEST filesystem_xfs 00:11:24.393 ************************************ 00:11:24.393 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.652 16:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 870468 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 870468 ']' 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 870468 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 870468 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 870468' 00:11:24.652 killing process with pid 870468 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 870468 00:11:24.652 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 870468 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:25.221 00:11:25.221 real 0m18.296s 00:11:25.221 user 1m12.079s 00:11:25.221 sys 0m1.421s 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.221 ************************************ 00:11:25.221 END TEST nvmf_filesystem_no_in_capsule 00:11:25.221 ************************************ 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.221 ************************************ 00:11:25.221 START TEST nvmf_filesystem_in_capsule 00:11:25.221 ************************************ 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=873619 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 873619 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 873619 ']' 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
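(The in-capsule pass starting here is configured identically to the previous run except for the transport. A sketch of the RPC sequence follows, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper; only "-c 4096" is new, letting up to 4096 bytes of I/O data ride inside the TCP command capsule instead of being fetched in a separate transfer.)

    # Create the TCP transport with a 4 KiB in-capsule data budget.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # Back the namespace with a 512 MiB malloc bdev using 512-byte blocks.
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Expose it through subsystem cnode1, listening on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420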
00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.221 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.221 [2024-12-16 16:17:13.684090] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:25.221 [2024-12-16 16:17:13.684139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.221 [2024-12-16 16:17:13.762909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.221 [2024-12-16 16:17:13.783213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.221 [2024-12-16 16:17:13.783251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.221 [2024-12-16 16:17:13.783260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.221 [2024-12-16 16:17:13.783267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.221 [2024-12-16 16:17:13.783273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.221 [2024-12-16 16:17:13.784695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.221 [2024-12-16 16:17:13.784804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.221 [2024-12-16 16:17:13.784889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.221 [2024-12-16 16:17:13.784891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.480 [2024-12-16 16:17:13.925286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.480 16:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.480 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.480 Malloc1 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 [2024-12-16 16:17:14.078261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:25.481 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:25.481 16:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:25.739 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.739 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.739 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.739 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:25.739 { 00:11:25.739 "name": "Malloc1", 00:11:25.739 "aliases": [ 00:11:25.739 "f8d509c0-d2a4-4770-a957-73c1323535a5" 00:11:25.739 ], 00:11:25.739 "product_name": "Malloc disk", 00:11:25.739 "block_size": 512, 00:11:25.739 "num_blocks": 1048576, 00:11:25.739 "uuid": "f8d509c0-d2a4-4770-a957-73c1323535a5", 00:11:25.739 "assigned_rate_limits": { 00:11:25.739 "rw_ios_per_sec": 0, 00:11:25.739 "rw_mbytes_per_sec": 0, 00:11:25.739 "r_mbytes_per_sec": 0, 00:11:25.739 "w_mbytes_per_sec": 0 00:11:25.739 }, 00:11:25.739 "claimed": true, 00:11:25.739 "claim_type": "exclusive_write", 00:11:25.739 "zoned": false, 00:11:25.739 "supported_io_types": { 00:11:25.739 "read": true, 00:11:25.739 "write": true, 00:11:25.739 "unmap": true, 00:11:25.739 "flush": true, 00:11:25.739 "reset": true, 00:11:25.739 "nvme_admin": false, 00:11:25.739 "nvme_io": false, 00:11:25.739 "nvme_io_md": false, 00:11:25.739 "write_zeroes": true, 00:11:25.739 "zcopy": true, 00:11:25.739 "get_zone_info": false, 00:11:25.739 "zone_management": false, 00:11:25.739 "zone_append": false, 00:11:25.739 "compare": false, 00:11:25.739 "compare_and_write": false, 00:11:25.739 "abort": true, 00:11:25.739 "seek_hole": false, 00:11:25.739 "seek_data": false, 00:11:25.739 "copy": true, 00:11:25.739 "nvme_iov_md": false 00:11:25.739 }, 00:11:25.739 "memory_domains": [ 00:11:25.739 { 00:11:25.739 "dma_device_id": "system", 00:11:25.739 "dma_device_type": 1 00:11:25.739 }, 00:11:25.739 { 00:11:25.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.739 "dma_device_type": 2 00:11:25.739 } 00:11:25.739 ], 00:11:25.739 "driver_specific": {} 00:11:25.739 } 00:11:25.739 ]' 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:25.740 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.114 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.114 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.114 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.114 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.114 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:29.017 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:29.275 16:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:29.275 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.652 ************************************ 00:11:30.652 START TEST filesystem_in_capsule_ext4 00:11:30.652 ************************************ 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:30.652 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:30.652 mke2fs 1.47.0 (5-Feb-2023) 00:11:30.652 Discarding device blocks: 0/522240 done 00:11:30.652 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:30.652 Filesystem UUID: 2b85fc84-e931-4887-bca9-de98aa067d67 00:11:30.652 Superblock backups stored on blocks: 00:11:30.652 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:30.652 00:11:30.652 Allocating group tables: 0/64 done 00:11:30.652 Writing inode tables: 
0/64 done 00:11:30.652 Creating journal (8192 blocks): done 00:11:32.931 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:32.931 00:11:32.931 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:32.931 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 873619 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.492 00:11:39.492 real 0m8.339s 00:11:39.492 user 0m0.022s 00:11:39.492 sys 0m0.079s 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:39.492 ************************************ 00:11:39.492 END TEST filesystem_in_capsule_ext4 00:11:39.492 ************************************ 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.492 
************************************ 00:11:39.492 START TEST filesystem_in_capsule_btrfs 00:11:39.492 ************************************ 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:39.492 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:39.492 btrfs-progs v6.8.1 00:11:39.492 See https://btrfs.readthedocs.io for more information. 00:11:39.492 00:11:39.492 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:39.492 NOTE: several default settings have changed in version 5.15, please make sure 00:11:39.492 this does not affect your deployments: 00:11:39.492 - DUP for metadata (-m dup) 00:11:39.492 - enabled no-holes (-O no-holes) 00:11:39.492 - enabled free-space-tree (-R free-space-tree) 00:11:39.492 00:11:39.492 Label: (null) 00:11:39.492 UUID: 4f25be22-de02-4bf5-9b68-0f0e34361a43 00:11:39.492 Node size: 16384 00:11:39.492 Sector size: 4096 (CPU page size: 4096) 00:11:39.492 Filesystem size: 510.00MiB 00:11:39.492 Block group profiles: 00:11:39.492 Data: single 8.00MiB 00:11:39.492 Metadata: DUP 32.00MiB 00:11:39.492 System: DUP 8.00MiB 00:11:39.492 SSD detected: yes 00:11:39.492 Zoned device: no 00:11:39.493 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:39.493 Checksum: crc32c 00:11:39.493 Number of devices: 1 00:11:39.493 Devices: 00:11:39.493 ID SIZE PATH 00:11:39.493 1 510.00MiB /dev/nvme0n1p1 00:11:39.493 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 873619 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.493 00:11:39.493 real 0m0.580s 00:11:39.493 user 0m0.030s 00:11:39.493 sys 0m0.109s 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:39.493 ************************************ 00:11:39.493 END TEST filesystem_in_capsule_btrfs 00:11:39.493 ************************************ 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.493 ************************************ 00:11:39.493 START TEST filesystem_in_capsule_xfs 00:11:39.493 ************************************ 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:39.493 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:39.493 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:39.493 = sectsz=512 attr=2, projid32bit=1 00:11:39.493 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:39.493 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:39.493 data = bsize=4096 blocks=130560, imaxpct=25 00:11:39.493 = sunit=0 swidth=0 blks 00:11:39.493 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:39.493 log =internal log bsize=4096 blocks=16384, version=2 00:11:39.493 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:39.493 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:40.428 Discarding blocks...Done. 
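(The mkfs.xfs invocation above, like mkfs.ext4 and mkfs.btrfs before it, goes through the harness's make_filesystem helper; the trace shows its force-flag dispatch ('[' xfs = ext4 ']' followed by force=-f). A condensed rendering of that helper, with its retry counter elided:)

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mke2fs spells its non-interactive flag -F
        else
            force=-f    # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" $force "$dev_name"    # e.g. mkfs.xfs -f /dev/nvme0n1p1
    }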
00:11:40.428 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:40.428 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 873619 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.330 00:11:42.330 real 0m2.639s 00:11:42.330 user 0m0.026s 00:11:42.330 sys 0m0.075s 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:42.330 ************************************ 00:11:42.330 END TEST filesystem_in_capsule_xfs 00:11:42.330 ************************************ 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 873619 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 873619 ']' 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 873619 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 873619 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.330 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.331 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 873619' 00:11:42.331 killing process with pid 873619 00:11:42.331 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 873619 00:11:42.331 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 873619 00:11:42.590 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:42.590 00:11:42.590 real 0m17.545s 00:11:42.590 user 1m9.079s 00:11:42.590 sys 0m1.422s 00:11:42.590 16:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.590 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.590 ************************************ 00:11:42.590 END TEST nvmf_filesystem_in_capsule 00:11:42.590 ************************************ 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.849 rmmod nvme_tcp 00:11:42.849 rmmod nvme_fabrics 00:11:42.849 rmmod nvme_keyring 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.849 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.751 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.751 00:11:44.751 real 0m44.607s 00:11:44.751 user 2m23.191s 00:11:44.751 sys 0m7.597s 00:11:44.751 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.751 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.751 
************************************ 00:11:44.751 END TEST nvmf_filesystem 00:11:44.751 ************************************ 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.010 ************************************ 00:11:45.010 START TEST nvmf_target_discovery 00:11:45.010 ************************************ 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:45.010 * Looking for test storage... 00:11:45.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.010 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.011 --rc genhtml_branch_coverage=1 00:11:45.011 --rc genhtml_function_coverage=1 00:11:45.011 --rc genhtml_legend=1 00:11:45.011 --rc geninfo_all_blocks=1 00:11:45.011 --rc geninfo_unexecuted_blocks=1 00:11:45.011 00:11:45.011 ' 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.011 --rc genhtml_branch_coverage=1 00:11:45.011 --rc genhtml_function_coverage=1 00:11:45.011 --rc genhtml_legend=1 00:11:45.011 --rc geninfo_all_blocks=1 00:11:45.011 --rc geninfo_unexecuted_blocks=1 00:11:45.011 00:11:45.011 ' 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.011 --rc genhtml_branch_coverage=1 00:11:45.011 --rc genhtml_function_coverage=1 00:11:45.011 --rc genhtml_legend=1 00:11:45.011 --rc geninfo_all_blocks=1 00:11:45.011 --rc geninfo_unexecuted_blocks=1 00:11:45.011 00:11:45.011 ' 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.011 --rc genhtml_branch_coverage=1 00:11:45.011 --rc genhtml_function_coverage=1 00:11:45.011 --rc genhtml_legend=1 00:11:45.011 --rc geninfo_all_blocks=1 00:11:45.011 --rc geninfo_unexecuted_blocks=1 00:11:45.011 00:11:45.011 ' 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.011 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.271 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.839 16:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:51.839 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.839 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:51.840 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:51.840 Found net devices under 0000:af:00.0: cvl_0_0 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:51.840 Found net devices under 0000:af:00.1: cvl_0_1 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.840 16:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:11:51.840 00:11:51.840 --- 10.0.0.2 ping statistics --- 00:11:51.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.840 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:11:51.840 00:11:51.840 --- 10.0.0.1 ping statistics --- 00:11:51.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.840 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=880210 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 880210 00:11:51.840 16:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 880210 ']' 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.840 [2024-12-16 16:17:39.697648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:51.840 [2024-12-16 16:17:39.697695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.840 [2024-12-16 16:17:39.778871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.840 [2024-12-16 16:17:39.802043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.840 [2024-12-16 16:17:39.802081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.840 [2024-12-16 16:17:39.802091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.840 [2024-12-16 16:17:39.802113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.840 [2024-12-16 16:17:39.802119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
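[editor's note] The nvmf_tcp_init plumbing traced just above, which produced the two successful pings, reduces to roughly the following. Interface names (cvl_0_0/cvl_0_1), namespace, and addresses are taken from the trace; this is a sketch of the nvmf/common.sh flow under those values, not the function itself.

    # move the target-side interface into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side, then verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # finally, launch the target inside the namespace (as the trace does)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF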
00:11:51.840 [2024-12-16 16:17:39.803453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.840 [2024-12-16 16:17:39.803564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.840 [2024-12-16 16:17:39.803674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.840 [2024-12-16 16:17:39.803675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.840 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 [2024-12-16 16:17:39.936624] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 Null1 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 [2024-12-16 16:17:39.993227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 Null2 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:51.841 Null3 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 Null4 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.841 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:51.841 00:11:51.841 Discovery Log Number of Records 6, Generation counter 6 00:11:51.841 =====Discovery Log Entry 0====== 00:11:51.841 trtype: tcp 00:11:51.841 adrfam: ipv4 00:11:51.841 subtype: current discovery subsystem 00:11:51.841 treq: not required 00:11:51.841 portid: 0 00:11:51.841 trsvcid: 4420 00:11:51.841 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:51.841 traddr: 10.0.0.2 00:11:51.841 eflags: explicit discovery connections, duplicate discovery information 00:11:51.841 sectype: none 00:11:51.841 =====Discovery Log Entry 1====== 00:11:51.841 trtype: tcp 00:11:51.841 adrfam: ipv4 00:11:51.841 subtype: nvme subsystem 00:11:51.841 treq: not required 00:11:51.841 portid: 0 00:11:51.841 trsvcid: 4420 00:11:51.841 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:51.841 traddr: 10.0.0.2 00:11:51.841 eflags: none 00:11:51.841 sectype: none 00:11:51.841 =====Discovery Log Entry 2====== 00:11:51.841 trtype: tcp 00:11:51.841 adrfam: ipv4 00:11:51.841 subtype: nvme subsystem 00:11:51.841 treq: not required 00:11:51.841 portid: 0 00:11:51.841 trsvcid: 4420 00:11:51.841 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:51.841 traddr: 10.0.0.2 00:11:51.841 eflags: none 00:11:51.842 sectype: none 00:11:51.842 =====Discovery Log Entry 3====== 00:11:51.842 trtype: tcp 00:11:51.842 adrfam: ipv4 00:11:51.842 subtype: nvme subsystem 00:11:51.842 treq: not required 00:11:51.842 portid: 0 00:11:51.842 trsvcid: 4420 00:11:51.842 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:51.842 traddr: 10.0.0.2 00:11:51.842 eflags: none 00:11:51.842 sectype: none 00:11:51.842 =====Discovery Log Entry 4====== 00:11:51.842 trtype: tcp 00:11:51.842 adrfam: ipv4 00:11:51.842 subtype: nvme subsystem 
00:11:51.842 treq: not required 00:11:51.842 portid: 0 00:11:51.842 trsvcid: 4420 00:11:51.842 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:51.842 traddr: 10.0.0.2 00:11:51.842 eflags: none 00:11:51.842 sectype: none 00:11:51.842 =====Discovery Log Entry 5====== 00:11:51.842 trtype: tcp 00:11:51.842 adrfam: ipv4 00:11:51.842 subtype: discovery subsystem referral 00:11:51.842 treq: not required 00:11:51.842 portid: 0 00:11:51.842 trsvcid: 4430 00:11:51.842 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:51.842 traddr: 10.0.0.2 00:11:51.842 eflags: none 00:11:51.842 sectype: none 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:51.842 Perform nvmf subsystem discovery via RPC 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 [ 00:11:51.842 { 00:11:51.842 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:51.842 "subtype": "Discovery", 00:11:51.842 "listen_addresses": [ 00:11:51.842 { 00:11:51.842 "trtype": "TCP", 00:11:51.842 "adrfam": "IPv4", 00:11:51.842 "traddr": "10.0.0.2", 00:11:51.842 "trsvcid": "4420" 00:11:51.842 } 00:11:51.842 ], 00:11:51.842 "allow_any_host": true, 00:11:51.842 "hosts": [] 00:11:51.842 }, 00:11:51.842 { 00:11:51.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.842 "subtype": "NVMe", 00:11:51.842 "listen_addresses": [ 00:11:51.842 { 00:11:51.842 "trtype": "TCP", 00:11:51.842 "adrfam": "IPv4", 00:11:51.842 "traddr": "10.0.0.2", 00:11:51.842 "trsvcid": "4420" 00:11:51.842 } 00:11:51.842 ], 00:11:51.842 "allow_any_host": true, 00:11:51.842 "hosts": [], 00:11:51.842 "serial_number": "SPDK00000000000001", 00:11:51.842 "model_number": "SPDK bdev Controller", 00:11:51.842 "max_namespaces": 32, 00:11:51.842 "min_cntlid": 1, 00:11:51.842 "max_cntlid": 65519, 00:11:51.842 "namespaces": [ 00:11:51.842 { 00:11:51.842 "nsid": 1, 00:11:51.842 "bdev_name": "Null1", 00:11:51.842 "name": "Null1", 00:11:51.842 "nguid": "664D0FE3384447B2B600F48155EC3BC5", 00:11:51.842 "uuid": "664d0fe3-3844-47b2-b600-f48155ec3bc5" 00:11:51.842 } 00:11:51.842 ] 00:11:51.842 }, 00:11:51.842 { 00:11:51.842 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:51.842 "subtype": "NVMe", 00:11:51.842 "listen_addresses": [ 00:11:51.842 { 00:11:51.842 "trtype": "TCP", 00:11:51.842 "adrfam": "IPv4", 00:11:51.842 "traddr": "10.0.0.2", 00:11:51.842 "trsvcid": "4420" 00:11:51.842 } 00:11:51.842 ], 00:11:51.842 "allow_any_host": true, 00:11:51.842 "hosts": [], 00:11:51.842 "serial_number": "SPDK00000000000002", 00:11:51.842 "model_number": "SPDK bdev Controller", 00:11:51.842 "max_namespaces": 32, 00:11:51.842 "min_cntlid": 1, 00:11:51.842 "max_cntlid": 65519, 00:11:51.842 "namespaces": [ 00:11:51.842 { 00:11:51.842 "nsid": 1, 00:11:51.842 "bdev_name": "Null2", 00:11:51.842 "name": "Null2", 00:11:51.842 "nguid": "5DBD87CDCB2E4CCFBFB994FC6EB0482B", 00:11:51.842 "uuid": "5dbd87cd-cb2e-4ccf-bfb9-94fc6eb0482b" 00:11:51.842 } 00:11:51.842 ] 00:11:51.842 }, 00:11:51.842 { 00:11:51.842 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:51.842 "subtype": "NVMe", 00:11:51.842 "listen_addresses": [ 00:11:51.842 { 00:11:51.842 "trtype": "TCP", 00:11:51.842 "adrfam": "IPv4", 00:11:51.842 "traddr": "10.0.0.2", 
00:11:51.842 "trsvcid": "4420" 00:11:51.842 } 00:11:51.842 ], 00:11:51.842 "allow_any_host": true, 00:11:51.842 "hosts": [], 00:11:51.842 "serial_number": "SPDK00000000000003", 00:11:51.842 "model_number": "SPDK bdev Controller", 00:11:51.842 "max_namespaces": 32, 00:11:51.842 "min_cntlid": 1, 00:11:51.842 "max_cntlid": 65519, 00:11:51.842 "namespaces": [ 00:11:51.842 { 00:11:51.842 "nsid": 1, 00:11:51.842 "bdev_name": "Null3", 00:11:51.842 "name": "Null3", 00:11:51.842 "nguid": "4D564B4E90D64C3BA7411E566BB185B6", 00:11:51.842 "uuid": "4d564b4e-90d6-4c3b-a741-1e566bb185b6" 00:11:51.842 } 00:11:51.842 ] 00:11:51.842 }, 00:11:51.842 { 00:11:51.842 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:51.842 "subtype": "NVMe", 00:11:51.842 "listen_addresses": [ 00:11:51.842 { 00:11:51.842 "trtype": "TCP", 00:11:51.842 "adrfam": "IPv4", 00:11:51.842 "traddr": "10.0.0.2", 00:11:51.842 "trsvcid": "4420" 00:11:51.842 } 00:11:51.842 ], 00:11:51.842 "allow_any_host": true, 00:11:51.842 "hosts": [], 00:11:51.842 "serial_number": "SPDK00000000000004", 00:11:51.842 "model_number": "SPDK bdev Controller", 00:11:51.842 "max_namespaces": 32, 00:11:51.842 "min_cntlid": 1, 00:11:51.842 "max_cntlid": 65519, 00:11:51.842 "namespaces": [ 00:11:51.842 { 00:11:51.842 "nsid": 1, 00:11:51.842 "bdev_name": "Null4", 00:11:51.842 "name": "Null4", 00:11:51.842 "nguid": "4BA59B01ACAD4FC68845F01890CDFE7E", 00:11:51.842 "uuid": "4ba59b01-acad-4fc6-8845-f01890cdfe7e" 00:11:51.842 } 00:11:51.842 ] 00:11:51.842 } 00:11:51.842 ] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:51.842 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:51.842 16:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.843 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.104 rmmod nvme_tcp 00:11:52.104 rmmod nvme_fabrics 00:11:52.104 rmmod nvme_keyring 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 880210 ']' 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 880210 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 880210 ']' 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 880210 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 880210 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 880210' 00:11:52.104 killing process with pid 880210 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 880210 00:11:52.104 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 880210 00:11:52.362 16:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.362 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.263 00:11:54.263 real 0m9.397s 00:11:54.263 user 0m5.617s 00:11:54.263 sys 0m4.864s 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.263 ************************************ 00:11:54.263 END TEST nvmf_target_discovery 00:11:54.263 ************************************ 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.263 16:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.522 ************************************ 00:11:54.522 START TEST nvmf_referrals 00:11:54.522 ************************************ 00:11:54.522 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:54.522 * Looking for test storage... 
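The nvmf_target_discovery run that closes above reduces to a short RPC sequence. What follows is a minimal sketch of that flow, not the suite's own code: it assumes a running nvmf_tgt, SPDK's scripts/rpc.py as the RPC client (the log's rpc_cmd is the suite's wrapper around it), and the cnode1..4 subsystems and Null1..4 bdevs created earlier in the run; addresses and NQNs are copied from the output above.

# Hypothetical reproduction of the discovery test flow; paths and addresses
# mirror the log above and are illustrative, not authoritative.
rpc=./scripts/rpc.py

# Listeners for the four subsystems and the discovery service on 10.0.0.2:4420.
for i in 1 2 3 4; do
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# A referral on port 4430; it surfaces as Discovery Log Entry 5 above.
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# Initiator-side view: six records expected (discovery subsystem,
# four NVMe subsystems, one referral).
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Target-side view of the same state.
$rpc nvmf_get_subsystems

# Teardown, as in the log: each subsystem plus its null bdev, then the
# referral, leaving bdev_get_bdevs empty at the end.
for i in 1 2 3 4; do
  $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  $rpc bdev_null_delete "Null$i"
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
$rpc bdev_get_bdevs | jq -r '.[].name'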
00:11:54.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.522 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:54.522 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:54.522 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:54.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.522 --rc genhtml_branch_coverage=1 00:11:54.522 --rc genhtml_function_coverage=1 00:11:54.522 --rc genhtml_legend=1 00:11:54.522 --rc geninfo_all_blocks=1 00:11:54.522 --rc geninfo_unexecuted_blocks=1 00:11:54.522 00:11:54.522 ' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:54.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.522 --rc genhtml_branch_coverage=1 00:11:54.522 --rc genhtml_function_coverage=1 00:11:54.522 --rc genhtml_legend=1 00:11:54.522 --rc geninfo_all_blocks=1 00:11:54.522 --rc geninfo_unexecuted_blocks=1 00:11:54.522 00:11:54.522 ' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:54.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.522 --rc genhtml_branch_coverage=1 00:11:54.522 --rc genhtml_function_coverage=1 00:11:54.522 --rc genhtml_legend=1 00:11:54.522 --rc geninfo_all_blocks=1 00:11:54.522 --rc geninfo_unexecuted_blocks=1 00:11:54.522 00:11:54.522 ' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:54.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.522 --rc genhtml_branch_coverage=1 00:11:54.522 --rc genhtml_function_coverage=1 00:11:54.522 --rc genhtml_legend=1 00:11:54.522 --rc geninfo_all_blocks=1 00:11:54.522 --rc geninfo_unexecuted_blocks=1 00:11:54.522 00:11:54.522 ' 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.522 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.523 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:01.092 16:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:01.092 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:01.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:01.092 
16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:01.092 Found net devices under 0000:af:00.0: cvl_0_0 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:01.092 Found net devices under 0000:af:00.1: cvl_0_1 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.092 16:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.092 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.093 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:01.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:12:01.093 00:12:01.093 --- 10.0.0.2 ping statistics --- 00:12:01.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.093 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:01.093 00:12:01.093 --- 10.0.0.1 ping statistics --- 00:12:01.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.093 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=883920 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 883920 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 883920 ']' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
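The two pings above are the last sanity check on the test network before nvmf_tgt starts. As a rough sketch, the topology the common.sh helpers assemble looks like the following; the interface names (cvl_0_0, cvl_0_1), the namespace name, and the tagged iptables rule are taken from the log, not from a canonical recipe.

# Target-side NIC moves into its own namespace; the initiator NIC stays in
# the root namespace, so one host exercises a real NIC-to-NIC TCP path.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Addressing: 10.0.0.1 (initiator, root ns) and 10.0.0.2 (target, namespace).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, tagging the rule so cleanup can grep it back out.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# One ping in each direction, matching the output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target inside the namespace (via ip netns exec, which is exactly what NVMF_TARGET_NS_CMD expands to above) is what makes the in-namespace nvmf_tgt reachable at 10.0.0.2 from the root namespace.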
00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 [2024-12-16 16:17:49.109961] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:01.093 [2024-12-16 16:17:49.110016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.093 [2024-12-16 16:17:49.190229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.093 [2024-12-16 16:17:49.213864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.093 [2024-12-16 16:17:49.213899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.093 [2024-12-16 16:17:49.213908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.093 [2024-12-16 16:17:49.213915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.093 [2024-12-16 16:17:49.213921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.093 [2024-12-16 16:17:49.215250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.093 [2024-12-16 16:17:49.215357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.093 [2024-12-16 16:17:49.215468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.093 [2024-12-16 16:17:49.215468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 [2024-12-16 16:17:49.347139] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
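With the target process up, the referrals test that follows checks that referrals registered over RPC show up in, and disappear from, the discovery log. A condensed sketch of the sequence the next stretch of output walks through, under the same assumptions as before (rpc.py stands in for the suite's rpc_cmd; the --hostnqn/--hostid flags visible in the log are omitted here for brevity):

rpc=./scripts/rpc.py

# TCP transport with an 8 KiB I/O unit, then a discovery listener on 8009.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

# Register three referrals and confirm the RPC view counts all of them.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length                   # expect 3
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Cross-check from the initiator: the same addresses must appear as
# non-current entries in the discovery log served on port 8009.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort

# Remove them again; both views should drop back to empty before the test
# moves on to referrals carrying an explicit subsystem NQN (-n ...).
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done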
00:12:01.093 [2024-12-16 16:17:49.368233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.093 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.094 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.094 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:01.094 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.094 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.094 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:01.352 16:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.352 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.353 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.611 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:01.611 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.611 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:01.612 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:01.612 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.612 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.612 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.612 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.612 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.870 16:17:50 
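
Each mutation above is verified from both sides: once through the target's own RPC (nvmf_discovery_get_referrals) and once from a host's point of view via the discovery log page. The host-side helper reduces to filtering the JSON from nvme discover by its .subtype field; a sketch of the two filters in play, with the --hostnqn/--hostid options from the trace dropped for brevity:

    # Referral addresses, excluding the discovery subsystem being queried:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

    # Subsystem NQN carried by one particular entry type:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
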
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.870 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.129 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:02.388 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:02.388 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:02.388 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:02.388 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:02.388 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.388 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.646 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.906 rmmod nvme_tcp 00:12:02.906 rmmod nvme_fabrics 00:12:02.906 rmmod nvme_keyring 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 883920 ']' 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 883920 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 883920 ']' 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 883920 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883920 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883920' 00:12:02.906 killing process with pid 883920 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 883920 00:12:02.906 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 883920 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.165 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.069 00:12:05.069 real 0m10.718s 00:12:05.069 user 0m11.868s 00:12:05.069 sys 0m5.151s 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.069 ************************************ 00:12:05.069 END TEST nvmf_referrals 00:12:05.069 ************************************ 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.069 16:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.330 ************************************ 00:12:05.330 START TEST nvmf_connect_disconnect 00:12:05.330 ************************************ 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.330 * Looking for test storage... 00:12:05.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.330 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.330 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.331 16:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.331 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.899 
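
The "[: : integer expression expected" message recorded above is bash complaining that '[' '' -eq 1 ']' compares an empty string numerically; the run is unaffected because the test simply evaluates false, but the noise is avoidable. One hedged way to harden such a check, with a hypothetical variable name standing in for the unset one:

    # Default the operand to 0 so [ ... -eq ... ] always sees an integer.
    maybe_empty=""
    if [ "${maybe_empty:-0}" -eq 1 ]; then
        echo "enabled"
    fi
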
16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:11.899 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.899 
16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:11.899 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:11.900 Found net devices under 0000:af:00.0: cvl_0_0 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
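
The enumeration above walks the supported e810/x722/mlx PCI IDs and then resolves each matching function to its kernel network interfaces through sysfs, which is how the run finds cvl_0_0 and cvl_0_1 under the two 0x159b ports. The lookup itself reduces to a glob, as in this condensed sketch for one port:

    # Every netdev bound to a PCI function appears as a directory under
    # /sys/bus/pci/devices/<BDF>/net/ (BDF taken from this run's trace).
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # absolute sysfs paths
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
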
00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:11.900 Found net devices under 0000:af:00.1: cvl_0_1 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:12:11.900 00:12:11.900 --- 10.0.0.2 ping statistics --- 00:12:11.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.900 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:11.900 00:12:11.900 --- 10.0.0.1 ping statistics --- 00:12:11.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.900 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=887833 00:12:11.900 16:17:59 
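
The nvmf_tcp_init sequence traced above moves one physical port into a private network namespace to act as the target while the other stays in the root namespace as the initiator, then proves the back-to-back link with a ping in each direction before any NVMe traffic flows. Condensed to its commands (the real run additionally tags the iptables rule with an SPDK_NVMF comment so teardown can strip it):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
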
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 887833 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 887833 ']' 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.900 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.900 [2024-12-16 16:17:59.916365] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:11.900 [2024-12-16 16:17:59.916411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.900 [2024-12-16 16:17:59.994474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.900 [2024-12-16 16:18:00.019931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.900 [2024-12-16 16:18:00.019966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.900 [2024-12-16 16:18:00.019976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.900 [2024-12-16 16:18:00.019985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.900 [2024-12-16 16:18:00.019993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
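
nvmfappstart above boils down to launching the target binary inside the namespace, remembering its pid, and blocking until the RPC socket answers. A rough sketch; capturing the pid with $! is an assumption here (the harness records it its own way), and waitforlisten is the harness helper seen in the trace doing the polling:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # harness helper: poll /var/tmp/spdk.sock
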
00:12:11.900 [2024-12-16 16:18:00.021404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.900 [2024-12-16 16:18:00.021516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.900 [2024-12-16 16:18:00.021600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.900 [2024-12-16 16:18:00.021601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.900 [2024-12-16 16:18:00.167233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.900 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.901 16:18:00 
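
Once the reactors are up, the test provisions the target over RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem, the namespace, and (just below) a listener on the target-side address. The same sequence as plain rpc.py calls, assuming the default RPC socket:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    bdev=$(rpc.py bdev_malloc_create 64 512)    # 64 MiB, 512 B blocks -> "Malloc0"
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
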
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.901 [2024-12-16 16:18:00.228719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:11.901 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
[100 connect/disconnect iterations, elapsed 00:12:14.432 through 00:16:03.011, each logging the same record: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s); the repeated per-iteration lines are elided here. One target-side error surfaced mid-run, at elapsed 00:15:30.601: [2024-12-16 16:21:18.982041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1730210 is same with the state(6) to be set]
00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.011 rmmod nvme_tcp 00:16:03.011 rmmod nvme_fabrics 00:16:03.011 rmmod nvme_keyring 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 887833 ']' 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 887833 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 887833 ']' 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@958 -- # kill -0 887833 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.011 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 887833 00:16:03.270 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 887833' 00:16:03.271 killing process with pid 887833 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 887833 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 887833 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.271 16:21:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:05.806 00:16:05.806 real 4m0.203s 00:16:05.806 user 15m17.723s 00:16:05.806 sys 0m24.678s 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:05.806 ************************************ 00:16:05.806 END TEST nvmf_connect_disconnect 00:16:05.806 ************************************ 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.806 ************************************ 00:16:05.806 START TEST nvmf_multitarget 00:16:05.806 ************************************ 00:16:05.806 16:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:05.806 * Looking for test storage... 00:16:05.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.806 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:05.806 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:05.806 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:05.806 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:05.806 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:05.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.807 --rc genhtml_branch_coverage=1 00:16:05.807 --rc genhtml_function_coverage=1 00:16:05.807 --rc genhtml_legend=1 00:16:05.807 --rc geninfo_all_blocks=1 00:16:05.807 --rc geninfo_unexecuted_blocks=1 00:16:05.807 00:16:05.807 ' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:05.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.807 --rc genhtml_branch_coverage=1 00:16:05.807 --rc genhtml_function_coverage=1 00:16:05.807 --rc genhtml_legend=1 00:16:05.807 --rc geninfo_all_blocks=1 00:16:05.807 --rc geninfo_unexecuted_blocks=1 00:16:05.807 00:16:05.807 ' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:05.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.807 --rc genhtml_branch_coverage=1 00:16:05.807 --rc genhtml_function_coverage=1 00:16:05.807 --rc genhtml_legend=1 00:16:05.807 --rc geninfo_all_blocks=1 00:16:05.807 --rc geninfo_unexecuted_blocks=1 00:16:05.807 00:16:05.807 ' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:05.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.807 --rc genhtml_branch_coverage=1 00:16:05.807 --rc genhtml_function_coverage=1 00:16:05.807 --rc genhtml_legend=1 00:16:05.807 --rc geninfo_all_blocks=1 00:16:05.807 --rc geninfo_unexecuted_blocks=1 00:16:05.807 00:16:05.807 ' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.807 16:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:05.807 16:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.807 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:05.808 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:05.808 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:05.808 16:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
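The e810/x722 entries registered above, together with the mlx entries that continue just below, build the vendor:device lookup table that gather_supported_nvmf_pci_devs uses: every supported PCI ID goes into a class array, then the host's bus is walked and each match is echoed as a "Found ..." line. A minimal sketch of that classification, assuming plain lspci output rather than the pre-built pci_bus_cache the harness actually reads:

    # classify Ethernet NICs by PCI vendor:device ID (sketch; the real
    # nvmf/common.sh keys off a cached PCI map instead of calling lspci)
    e810=() x722=() mlx=()
    while read -r bdf; do
        id=$(lspci -n -s "$bdf" | awk '{print $3}')   # e.g. "8086:159b"
        case "$id" in
            8086:1592|8086:159b) e810+=("$bdf") ;;    # Intel E810 family
            8086:37d2)           x722+=("$bdf") ;;    # Intel X722
            15b3:*)              mlx+=("$bdf")  ;;    # Mellanox ConnectX
        esac
    done < <(lspci -D | awk '/Ethernet/ {print $1}')
    pci_devs=("${e810[@]}")   # on an E810 rig, keep only the E810 ports

On this node both 0x159b ports match the e810 table, which is why the trace below reports 0000:af:00.0 and 0000:af:00.1 and continues with pci_devs holding exactly those two devices.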
00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:12.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:12.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:12.378 Found net devices under 0000:af:00.0: cvl_0_0 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:12.378 Found net devices under 0000:af:00.1: cvl_0_1 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:12.378 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:12.379 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:12.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:16:12.379 00:16:12.379 --- 10.0.0.2 ping statistics --- 00:16:12.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.379 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:16:12.379 00:16:12.379 --- 10.0.0.1 ping statistics --- 00:16:12.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.379 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=931323 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 931323 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 931323 ']' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:12.379 [2024-12-16 16:22:00.139656] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
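At this point nvmf_tgt (pid 931323) has been launched inside the cvl_0_0_ns_spdk namespace and the harness is parked in waitforlisten with max_retries=100, polling until the new app answers on its RPC socket; the DPDK/EAL startup banner it prints while coming up continues below. A rough sketch of that wait loop, assuming SPDK's stock scripts/rpc.py and the default /var/tmp/spdk.sock address:

    # poll until the freshly started app services RPCs (sketch)
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do               # max_retries=100
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                              # RPC answered: app is up
            fi
            sleep 0.5
        done
        return 1                                      # never came up
    }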
00:16:12.379 [2024-12-16 16:22:00.139701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.379 [2024-12-16 16:22:00.215802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.379 [2024-12-16 16:22:00.238447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.379 [2024-12-16 16:22:00.238481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.379 [2024-12-16 16:22:00.238492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.379 [2024-12-16 16:22:00.238500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.379 [2024-12-16 16:22:00.238508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.379 [2024-12-16 16:22:00.240026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.379 [2024-12-16 16:22:00.240135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.379 [2024-12-16 16:22:00.240183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.379 [2024-12-16 16:22:00.240184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:12.379 "nvmf_tgt_1" 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:12.379 "nvmf_tgt_2" 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:12.379 true 00:16:12.379 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:12.638 true 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:12.638 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.639 rmmod nvme_tcp 00:16:12.639 rmmod nvme_fabrics 00:16:12.639 rmmod nvme_keyring 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 931323 ']' 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 931323 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 931323 ']' 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 931323 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.639 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 931323 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.898 16:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 931323' 00:16:12.898 killing process with pid 931323 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 931323 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 931323 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.898 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:15.431 00:16:15.431 real 0m9.506s 00:16:15.431 user 0m7.089s 00:16:15.431 sys 0m4.888s 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:15.431 ************************************ 00:16:15.431 END TEST nvmf_multitarget 00:16:15.431 ************************************ 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:15.431 ************************************ 00:16:15.431 START TEST nvmf_rpc 00:16:15.431 ************************************ 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:15.431 * Looking for test storage... 
00:16:15.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:15.431 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.432 --rc genhtml_branch_coverage=1 00:16:15.432 --rc genhtml_function_coverage=1 00:16:15.432 --rc genhtml_legend=1 00:16:15.432 --rc geninfo_all_blocks=1 00:16:15.432 --rc geninfo_unexecuted_blocks=1 00:16:15.432 00:16:15.432 ' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
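Sourcing nvmf/common.sh for the nvmf_rpc run re-exports the initiator defaults traced below: the 4420/4421/4422 ports, a host NQN freshly generated by nvme gen-hostnqn, and NVME_CONNECT='nvme connect'. Those pieces typically compose into an nvme-cli call shaped like this sketch, with nqn.2016-06.io.spdk:cnode1 standing in for whichever subsystem a given test actually creates:

    # connect an initiator to the test target (sketch assembled from the
    # harness variables; the subsystem NQN is per-test, not fixed)
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # random uuid-based host NQN
    nvme connect -t tcp \
        -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN"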
00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:15.432 16:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:15.432 16:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:22.005 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:22.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:22.005 Found net devices under 0000:af:00.0: cvl_0_0 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:22.005 Found net devices under 0000:af:00.1: cvl_0_1 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:22.005 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.006 16:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:16:22.006 00:16:22.006 --- 10.0.0.2 ping statistics --- 00:16:22.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.006 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:22.006 00:16:22.006 --- 10.0.0.1 ping statistics --- 00:16:22.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.006 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=935045 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 935045 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 935045 ']' 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.006 [2024-12-16 16:22:09.788264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
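At this point common.sh has finished building the loopback test rig and launched the target: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, cvl_0_1 stayed in the root namespace as 10.0.0.1/24, TCP port 4420 was opened with iptables, connectivity was verified with ping in both directions, and nvmf_tgt was started inside the namespace. A minimal sketch of the equivalent manual sequence, using the interface and namespace names from this run, with paths relative to an SPDK checkout and the rpc.py poll standing in for the harness's waitforlisten helper:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py -t 60 rpc_get_methods > /dev/null             # block until /var/tmp/spdk.sock answers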
00:16:22.006 [2024-12-16 16:22:09.788312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.006 [2024-12-16 16:22:09.867365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.006 [2024-12-16 16:22:09.890889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.006 [2024-12-16 16:22:09.890927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.006 [2024-12-16 16:22:09.890936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.006 [2024-12-16 16:22:09.890944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.006 [2024-12-16 16:22:09.890949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.006 [2024-12-16 16:22:09.892296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.006 [2024-12-16 16:22:09.892403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.006 [2024-12-16 16:22:09.892515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.006 [2024-12-16 16:22:09.892516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.006 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:22.006 "tick_rate": 2100000000, 00:16:22.006 "poll_groups": [ 00:16:22.006 { 00:16:22.006 "name": "nvmf_tgt_poll_group_000", 00:16:22.006 "admin_qpairs": 0, 00:16:22.006 "io_qpairs": 0, 00:16:22.006 "current_admin_qpairs": 0, 00:16:22.006 "current_io_qpairs": 0, 00:16:22.006 "pending_bdev_io": 0, 00:16:22.006 "completed_nvme_io": 0, 00:16:22.006 "transports": [] 00:16:22.006 }, 00:16:22.006 { 00:16:22.006 "name": "nvmf_tgt_poll_group_001", 00:16:22.006 "admin_qpairs": 0, 00:16:22.006 "io_qpairs": 0, 00:16:22.006 "current_admin_qpairs": 0, 00:16:22.006 "current_io_qpairs": 0, 00:16:22.006 "pending_bdev_io": 0, 00:16:22.006 "completed_nvme_io": 0, 00:16:22.006 "transports": [] 00:16:22.006 }, 00:16:22.006 { 00:16:22.006 "name": "nvmf_tgt_poll_group_002", 00:16:22.006 "admin_qpairs": 0, 00:16:22.006 "io_qpairs": 0, 00:16:22.006 
"current_admin_qpairs": 0, 00:16:22.006 "current_io_qpairs": 0, 00:16:22.006 "pending_bdev_io": 0, 00:16:22.006 "completed_nvme_io": 0, 00:16:22.006 "transports": [] 00:16:22.006 }, 00:16:22.006 { 00:16:22.006 "name": "nvmf_tgt_poll_group_003", 00:16:22.006 "admin_qpairs": 0, 00:16:22.006 "io_qpairs": 0, 00:16:22.006 "current_admin_qpairs": 0, 00:16:22.006 "current_io_qpairs": 0, 00:16:22.006 "pending_bdev_io": 0, 00:16:22.006 "completed_nvme_io": 0, 00:16:22.006 "transports": [] 00:16:22.006 } 00:16:22.006 ] 00:16:22.006 }' 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.006 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.006 [2024-12-16 16:22:10.133540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:22.007 "tick_rate": 2100000000, 00:16:22.007 "poll_groups": [ 00:16:22.007 { 00:16:22.007 "name": "nvmf_tgt_poll_group_000", 00:16:22.007 "admin_qpairs": 0, 00:16:22.007 "io_qpairs": 0, 00:16:22.007 "current_admin_qpairs": 0, 00:16:22.007 "current_io_qpairs": 0, 00:16:22.007 "pending_bdev_io": 0, 00:16:22.007 "completed_nvme_io": 0, 00:16:22.007 "transports": [ 00:16:22.007 { 00:16:22.007 "trtype": "TCP" 00:16:22.007 } 00:16:22.007 ] 00:16:22.007 }, 00:16:22.007 { 00:16:22.007 "name": "nvmf_tgt_poll_group_001", 00:16:22.007 "admin_qpairs": 0, 00:16:22.007 "io_qpairs": 0, 00:16:22.007 "current_admin_qpairs": 0, 00:16:22.007 "current_io_qpairs": 0, 00:16:22.007 "pending_bdev_io": 0, 00:16:22.007 "completed_nvme_io": 0, 00:16:22.007 "transports": [ 00:16:22.007 { 00:16:22.007 "trtype": "TCP" 00:16:22.007 } 00:16:22.007 ] 00:16:22.007 }, 00:16:22.007 { 00:16:22.007 "name": "nvmf_tgt_poll_group_002", 00:16:22.007 "admin_qpairs": 0, 00:16:22.007 "io_qpairs": 0, 00:16:22.007 "current_admin_qpairs": 0, 00:16:22.007 "current_io_qpairs": 0, 00:16:22.007 "pending_bdev_io": 0, 00:16:22.007 "completed_nvme_io": 0, 00:16:22.007 "transports": [ 00:16:22.007 { 00:16:22.007 "trtype": "TCP" 
00:16:22.007 } 00:16:22.007 ] 00:16:22.007 }, 00:16:22.007 { 00:16:22.007 "name": "nvmf_tgt_poll_group_003", 00:16:22.007 "admin_qpairs": 0, 00:16:22.007 "io_qpairs": 0, 00:16:22.007 "current_admin_qpairs": 0, 00:16:22.007 "current_io_qpairs": 0, 00:16:22.007 "pending_bdev_io": 0, 00:16:22.007 "completed_nvme_io": 0, 00:16:22.007 "transports": [ 00:16:22.007 { 00:16:22.007 "trtype": "TCP" 00:16:22.007 } 00:16:22.007 ] 00:16:22.007 } 00:16:22.007 ] 00:16:22.007 }' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 Malloc1 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 [2024-12-16 16:22:10.317143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:22.007 [2024-12-16 16:22:10.345692] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:22.007 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:22.007 could not add new controller: failed to write to nvme-fabrics device 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:22.007 16:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.007 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.950 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.950 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:22.950 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.950 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:22.950 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:25.482 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:25.482 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:25.482 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.482 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:25.482 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.482 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.483 [2024-12-16 16:22:13.699251] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:25.483 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:25.483 could not add new controller: failed to write to nvme-fabrics device 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.483 
16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.483 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.419 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.419 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:26.419 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.419 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:26.419 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:28.322 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.581 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.581 
16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.581 [2024-12-16 16:22:17.022156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.581 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:29.957 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.957 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:29.957 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.957 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:29.957 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 [2024-12-16 16:22:20.391222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.860 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.860 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.860 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.860 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.860 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.236 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.236 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:33.236 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.236 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:33.236 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:35.140 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.399 [2024-12-16 16:22:23.843102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.399 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.775 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.775 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:36.775 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.775 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:36.775 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:38.825 
16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:38.825 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:38.825 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.825 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:38.825 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.825 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:38.825 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.825 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.826 [2024-12-16 16:22:27.111478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.826 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.761 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.762 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:39.762 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.762 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:39.762 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:41.662 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 [2024-12-16 16:22:30.417560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.297 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.297 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.297 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.297 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.297 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:45.198 
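Note: the rpc.sh@81 loop traced above repeats the same create/attach/connect/teardown cycle once per iteration; the rpc.sh@99 loop that follows exercises only the RPC-side create/delete path, without a host connect. The @81 cycle, condensed into a sketch (every command, NQN, serial, address, and port is exactly as shown in the trace; rpc_cmd is the harness wrapper around scripts/rpc.py, and NVME_HOST is the --hostnqn/--hostid pair set up in nvmf/common.sh):

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # Host side: connect over TCP, wait for the namespace, then tear down.
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done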
16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 [2024-12-16 16:22:33.735845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.198 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.199 [2024-12-16 16:22:33.783953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.199 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 
16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 [2024-12-16 16:22:33.832090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 [2024-12-16 16:22:33.880266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 [2024-12-16 16:22:33.928440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.458 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:45.459 "tick_rate": 2100000000, 00:16:45.459 "poll_groups": [ 00:16:45.459 { 00:16:45.459 "name": "nvmf_tgt_poll_group_000", 00:16:45.459 "admin_qpairs": 2, 00:16:45.459 "io_qpairs": 168, 00:16:45.459 "current_admin_qpairs": 0, 00:16:45.459 "current_io_qpairs": 0, 00:16:45.459 "pending_bdev_io": 0, 00:16:45.459 "completed_nvme_io": 267, 00:16:45.459 "transports": [ 00:16:45.459 { 00:16:45.459 "trtype": "TCP" 00:16:45.459 } 00:16:45.459 ] 00:16:45.459 }, 00:16:45.459 { 00:16:45.459 "name": "nvmf_tgt_poll_group_001", 00:16:45.459 "admin_qpairs": 2, 00:16:45.459 "io_qpairs": 168, 00:16:45.459 "current_admin_qpairs": 0, 00:16:45.459 "current_io_qpairs": 0, 00:16:45.459 "pending_bdev_io": 0, 00:16:45.459 "completed_nvme_io": 219, 00:16:45.459 "transports": [ 00:16:45.459 { 00:16:45.459 "trtype": "TCP" 00:16:45.459 } 00:16:45.459 ] 00:16:45.459 }, 00:16:45.459 { 00:16:45.459 "name": "nvmf_tgt_poll_group_002", 00:16:45.459 "admin_qpairs": 1, 00:16:45.459 "io_qpairs": 168, 00:16:45.459 "current_admin_qpairs": 0, 00:16:45.459 "current_io_qpairs": 0, 00:16:45.459 "pending_bdev_io": 0, 00:16:45.459 "completed_nvme_io": 269, 00:16:45.459 "transports": [ 00:16:45.459 { 00:16:45.459 "trtype": "TCP" 00:16:45.459 } 00:16:45.459 ] 00:16:45.459 }, 00:16:45.459 { 00:16:45.459 "name": "nvmf_tgt_poll_group_003", 00:16:45.459 "admin_qpairs": 2, 00:16:45.459 "io_qpairs": 168, 00:16:45.459 "current_admin_qpairs": 0, 00:16:45.459 "current_io_qpairs": 0, 00:16:45.459 "pending_bdev_io": 0, 00:16:45.459 "completed_nvme_io": 267, 00:16:45.459 "transports": [ 00:16:45.459 { 00:16:45.459 "trtype": "TCP" 00:16:45.459 } 00:16:45.459 ] 00:16:45.459 } 00:16:45.459 ] 00:16:45.459 }' 00:16:45.459 16:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:45.459 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.459 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:45.718 rmmod nvme_tcp 00:16:45.718 rmmod nvme_fabrics 00:16:45.718 rmmod nvme_keyring 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 935045 ']' 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 935045 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 935045 ']' 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 935045 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935045 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 935045' 
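Note: the jsum helper traced above (rpc.sh@19-@20) totals one numeric field across all poll groups: jq extracts the value from each group and awk sums them. Against the nvmf_get_stats JSON printed earlier this gives 2+2+1+2 = 7 admin qpairs and 4 x 168 = 672 I/O qpairs, which is exactly what the (( 7 > 0 )) and (( 672 > 0 )) checks assert. A minimal sketch (how jsum consumes the JSON — here via the $stats variable captured at rpc.sh@110 — is an assumption):

    # Sum one numeric field of every poll group in the captured stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # -> 7   (2+2+1+2)
    jsum '.poll_groups[].io_qpairs'      # -> 672 (4 * 168)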
00:16:45.718 killing process with pid 935045 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 935045 00:16:45.718 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 935045 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.976 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.877 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:47.877 00:16:47.877 real 0m32.877s 00:16:47.877 user 1m39.034s 00:16:47.877 sys 0m6.497s 00:16:47.877 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.877 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.877 ************************************ 00:16:47.877 END TEST nvmf_rpc 00:16:47.877 ************************************ 00:16:47.878 16:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:47.878 16:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.878 16:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.878 16:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:48.137 ************************************ 00:16:48.137 START TEST nvmf_invalid 00:16:48.137 ************************************ 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:48.137 * Looking for test storage... 
00:16:48.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:48.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.137 --rc genhtml_branch_coverage=1 00:16:48.137 --rc genhtml_function_coverage=1 00:16:48.137 --rc genhtml_legend=1 00:16:48.137 --rc geninfo_all_blocks=1 00:16:48.137 --rc geninfo_unexecuted_blocks=1 00:16:48.137 00:16:48.137 ' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:48.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.137 --rc genhtml_branch_coverage=1 00:16:48.137 --rc genhtml_function_coverage=1 00:16:48.137 --rc genhtml_legend=1 00:16:48.137 --rc geninfo_all_blocks=1 00:16:48.137 --rc geninfo_unexecuted_blocks=1 00:16:48.137 00:16:48.137 ' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:48.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.137 --rc genhtml_branch_coverage=1 00:16:48.137 --rc genhtml_function_coverage=1 00:16:48.137 --rc genhtml_legend=1 00:16:48.137 --rc geninfo_all_blocks=1 00:16:48.137 --rc geninfo_unexecuted_blocks=1 00:16:48.137 00:16:48.137 ' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:48.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.137 --rc genhtml_branch_coverage=1 00:16:48.137 --rc genhtml_function_coverage=1 00:16:48.137 --rc genhtml_legend=1 00:16:48.137 --rc geninfo_all_blocks=1 00:16:48.137 --rc geninfo_unexecuted_blocks=1 00:16:48.137 00:16:48.137 ' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:48.137 16:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.137 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:48.138 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.707 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:54.708 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:54.708 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:54.708 Found net devices under 0000:af:00.0: cvl_0_0 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:54.708 Found net devices under 0000:af:00.1: cvl_0_1 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:54.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:16:54.708 00:16:54.708 --- 10.0.0.2 ping statistics --- 00:16:54.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.708 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:16:54.708 00:16:54.708 --- 10.0.0.1 ping statistics --- 00:16:54.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.708 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=942698 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 942698 00:16:54.708 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 942698 ']' 00:16:54.709 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.709 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.709 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.709 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.709 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.709 [2024-12-16 16:22:42.803197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
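[Editor's note] The setup traced above is the harness's standard two-port NVMe/TCP fixture: the glob /sys/bus/pci/devices/$pci/net/* mapped the two E810 ports (0000:af:00.0/1, device 0x159b, driver ice) to cvl_0_0 and cvl_0_1; cvl_0_0 was moved into the network namespace cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stayed in the default namespace as the initiator (10.0.0.1); an iptables ACCEPT rule was added for the NVMe/TCP listener port 4420; reachability was ping-verified in both directions; and nvmf_tgt was launched inside the namespace. A minimal standalone sketch of the same sequence follows; the readiness loop polling rpc_get_methods is an assumption standing in for the harness's waitforlisten helper, and the relative SPDK paths are illustrative:

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                                        # private namespace for the target side
    ip link set cvl_0_0 netns "$NS"                           # first E810 port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                        # initiator -> target sanity check

    # Launch the SPDK target inside the namespace, then wait until its RPC socket answers.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done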
00:16:54.709 [2024-12-16 16:22:42.803239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.709 [2024-12-16 16:22:42.885411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.709 [2024-12-16 16:22:42.908467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.709 [2024-12-16 16:22:42.908501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.709 [2024-12-16 16:22:42.908511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.709 [2024-12-16 16:22:42.908518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.709 [2024-12-16 16:22:42.908528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.709 [2024-12-16 16:22:42.910034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.709 [2024-12-16 16:22:42.910052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.709 [2024-12-16 16:22:42.910146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.709 [2024-12-16 16:22:42.910147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31892 00:16:54.709 [2024-12-16 16:22:43.215555] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:54.709 { 00:16:54.709 "nqn": "nqn.2016-06.io.spdk:cnode31892", 00:16:54.709 "tgt_name": "foobar", 00:16:54.709 "method": "nvmf_create_subsystem", 00:16:54.709 "req_id": 1 00:16:54.709 } 00:16:54.709 Got JSON-RPC error response 00:16:54.709 response: 00:16:54.709 { 00:16:54.709 "code": -32603, 00:16:54.709 "message": "Unable to find target foobar" 00:16:54.709 }' 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:54.709 { 00:16:54.709 "nqn": "nqn.2016-06.io.spdk:cnode31892", 00:16:54.709 "tgt_name": "foobar", 00:16:54.709 "method": "nvmf_create_subsystem", 00:16:54.709 "req_id": 1 00:16:54.709 } 00:16:54.709 Got JSON-RPC error response 00:16:54.709 
response: 00:16:54.709 { 00:16:54.709 "code": -32603, 00:16:54.709 "message": "Unable to find target foobar" 00:16:54.709 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:54.709 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22778 00:16:54.968 [2024-12-16 16:22:43.420245] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22778: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:54.968 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:54.968 { 00:16:54.968 "nqn": "nqn.2016-06.io.spdk:cnode22778", 00:16:54.968 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:54.968 "method": "nvmf_create_subsystem", 00:16:54.968 "req_id": 1 00:16:54.968 } 00:16:54.968 Got JSON-RPC error response 00:16:54.968 response: 00:16:54.968 { 00:16:54.968 "code": -32602, 00:16:54.968 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:54.968 }' 00:16:54.968 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:54.968 { 00:16:54.968 "nqn": "nqn.2016-06.io.spdk:cnode22778", 00:16:54.968 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:54.968 "method": "nvmf_create_subsystem", 00:16:54.968 "req_id": 1 00:16:54.968 } 00:16:54.968 Got JSON-RPC error response 00:16:54.968 response: 00:16:54.968 { 00:16:54.968 "code": -32602, 00:16:54.968 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:54.968 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:54.968 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:54.968 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26076 00:16:55.227 [2024-12-16 16:22:43.640968] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26076: invalid model number 'SPDK_Controller' 00:16:55.227 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:55.227 { 00:16:55.227 "nqn": "nqn.2016-06.io.spdk:cnode26076", 00:16:55.227 "model_number": "SPDK_Controller\u001f", 00:16:55.227 "method": "nvmf_create_subsystem", 00:16:55.227 "req_id": 1 00:16:55.227 } 00:16:55.227 Got JSON-RPC error response 00:16:55.227 response: 00:16:55.227 { 00:16:55.227 "code": -32602, 00:16:55.227 "message": "Invalid MN SPDK_Controller\u001f" 00:16:55.227 }' 00:16:55.227 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:55.227 { 00:16:55.227 "nqn": "nqn.2016-06.io.spdk:cnode26076", 00:16:55.227 "model_number": "SPDK_Controller\u001f", 00:16:55.227 "method": "nvmf_create_subsystem", 00:16:55.227 "req_id": 1 00:16:55.227 } 00:16:55.227 Got JSON-RPC error response 00:16:55.227 response: 00:16:55.227 { 00:16:55.227 "code": -32602, 00:16:55.227 "message": "Invalid MN SPDK_Controller\u001f" 00:16:55.227 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:55.227 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:55.227 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:55.227 16:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
[xtrace condensed: target/invalid.sh@22-25 declare the locals and loop 21 times, each pass running printf %x <code>, echo -e '\xNN', and string+=<char> until (( ll < length )) fails]
16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]]
16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W5lrJIC+h#tL~((9:>Z!$'
16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'W5lrJIC+h#tL~((9:>Z!$' nqn.2016-06.io.spdk:cnode26349
00:16:55.487 [2024-12-16 16:22:43.982102] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26349: invalid serial number 'W5lrJIC+h#tL~((9:>Z!$'
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
{
"nqn": "nqn.2016-06.io.spdk:cnode26349",
"serial_number": "W5lrJIC+h#tL~((9:>Z!$",
"method": "nvmf_create_subsystem",
"req_id": 1
}
Got JSON-RPC error response
response:
{
"code": -32602,
"message": "Invalid SN W5lrJIC+h#tL~((9:>Z!$"
}'
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ <out, as above> == *\I\n\v\a\l\i\d\ \S\N* ]]
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
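[Editor's note] The condensed loops come from gen_random_s in target/invalid.sh: it builds a string of the requested length from the code-point list 32-127 (printable ASCII plus DEL, which is why a \u007f lands in the model-number string below), converting each pick with printf %x and echo -e, while the @28 check [[ W == \- ]] apparently guards against a leading '-' that rpc.py would parse as an option. A condensed sketch of the same technique; the $RANDOM-based pick is an assumption, since the trace does not show how the script chooses indices:

    # Rebuild a random printable string the way the traced loop does.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                      # same code-point pool as the traced array
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}   # index choice is assumed, not from the trace
            string+=$(echo -e "\\x$(printf %x "$code")") # e.g. 87 -> \x57 -> 'W'
        done
        printf '%s\n' "$string"                          # printf avoids echo eating a leading '-'
    }
    gen_random_s 21    # e.g. W5lrJIC+h#tL~((9:>Z!$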
[xtrace condensed: second gen_random_s call; the chars array matches the one above, and target/invalid.sh@24-25 loop 41 times through the same printf %x / echo -e / string+= sequence]
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]]
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'QO`i!/=`wJI.9{s-j^ ;R+WAyVs~.t}~%mHpzc>u'
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'QO`i!/=`wJI.9{s-j^ ;R+WAyVs~.t}~%mHpzc>u' nqn.2016-06.io.spdk:cnode8685
00:16:56.007 [2024-12-16 16:22:44.463688] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8685: invalid model number 'QO`i!/=`wJI.9{s-j^ ;R+WAyVs~.t}~%mHpzc>u'
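[Editor's note] Every negative case in this suite follows the pattern visible in the trace: invoke scripts/rpc.py with a deliberately bad argument, capture the JSON-RPC error text it prints (rpc.py exits non-zero when the target returns an error), and glob-match the expected message. A generic sketch of that pattern; expect_rpc_error is an illustrative helper, not part of the SPDK tree:

    # Run an RPC that must fail and verify the error message it produces.
    expect_rpc_error() {
        local expected=$1; shift
        local out
        if out=$(./scripts/rpc.py "$@" 2>&1); then
            echo "RPC unexpectedly succeeded: $*" >&2; return 1
        fi
        [[ $out == *"$expected"* ]] || { echo "wrong error: $out" >&2; return 1; }
    }

    # Mirrors the invalid-target case traced earlier:
    expect_rpc_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31892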
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
{
"nqn": "nqn.2016-06.io.spdk:cnode8685",
"model_number": "QO`i!/=`wJI.9{s-j^ ;R+WAyVs~.t}~%mHpzc>\u007fu",
"method": "nvmf_create_subsystem",
"req_id": 1
}
Got JSON-RPC error response
response:
{
"code": -32602,
"message": "Invalid MN QO`i!/=`wJI.9{s-j^ ;R+WAyVs~.t}~%mHpzc>\u007fu"
}'
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ <out, as above> == *\I\n\v\a\l\i\d\ \M\N* ]]
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:16:56.266 [2024-12-16 16:22:44.660409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:16:56.525 [2024-12-16 16:22:45.083048] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
{
"nqn": "nqn.2016-06.io.spdk:cnode",
"listen_address": {
"trtype": "tcp",
"traddr": "",
"trsvcid": "4421"
},
"method": "nvmf_subsystem_remove_listener",
"req_id": 1
}
Got JSON-RPC error response
response:
{
00:16:56.525 "code": -32602, 00:16:56.525 "message": "Invalid parameters" 00:16:56.525 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:56.525 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9725 -i 0 00:16:56.782 [2024-12-16 16:22:45.291689] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9725: invalid cntlid range [0-65519] 00:16:56.782 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:56.782 { 00:16:56.782 "nqn": "nqn.2016-06.io.spdk:cnode9725", 00:16:56.782 "min_cntlid": 0, 00:16:56.782 "method": "nvmf_create_subsystem", 00:16:56.782 "req_id": 1 00:16:56.782 } 00:16:56.782 Got JSON-RPC error response 00:16:56.782 response: 00:16:56.782 { 00:16:56.782 "code": -32602, 00:16:56.782 "message": "Invalid cntlid range [0-65519]" 00:16:56.782 }' 00:16:56.782 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:56.782 { 00:16:56.782 "nqn": "nqn.2016-06.io.spdk:cnode9725", 00:16:56.782 "min_cntlid": 0, 00:16:56.782 "method": "nvmf_create_subsystem", 00:16:56.782 "req_id": 1 00:16:56.782 } 00:16:56.782 Got JSON-RPC error response 00:16:56.782 response: 00:16:56.782 { 00:16:56.782 "code": -32602, 00:16:56.782 "message": "Invalid cntlid range [0-65519]" 00:16:56.782 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:56.782 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11479 -i 65520 00:16:57.040 [2024-12-16 16:22:45.496347] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11479: invalid cntlid range [65520-65519] 00:16:57.040 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:57.040 { 00:16:57.040 "nqn": "nqn.2016-06.io.spdk:cnode11479", 00:16:57.040 "min_cntlid": 65520, 00:16:57.040 "method": "nvmf_create_subsystem", 00:16:57.040 "req_id": 1 00:16:57.040 } 00:16:57.040 Got JSON-RPC error response 00:16:57.040 response: 00:16:57.040 { 00:16:57.040 "code": -32602, 00:16:57.040 "message": "Invalid cntlid range [65520-65519]" 00:16:57.040 }' 00:16:57.040 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:57.040 { 00:16:57.040 "nqn": "nqn.2016-06.io.spdk:cnode11479", 00:16:57.040 "min_cntlid": 65520, 00:16:57.040 "method": "nvmf_create_subsystem", 00:16:57.040 "req_id": 1 00:16:57.040 } 00:16:57.040 Got JSON-RPC error response 00:16:57.040 response: 00:16:57.040 { 00:16:57.040 "code": -32602, 00:16:57.040 "message": "Invalid cntlid range [65520-65519]" 00:16:57.040 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.040 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30840 -I 0 00:16:57.299 [2024-12-16 16:22:45.688991] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30840: invalid cntlid range [1-0] 00:16:57.299 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:57.299 { 00:16:57.299 "nqn": "nqn.2016-06.io.spdk:cnode30840", 00:16:57.299 "max_cntlid": 0, 00:16:57.299 "method": "nvmf_create_subsystem", 00:16:57.299 
"req_id": 1 00:16:57.299 } 00:16:57.299 Got JSON-RPC error response 00:16:57.299 response: 00:16:57.299 { 00:16:57.299 "code": -32602, 00:16:57.299 "message": "Invalid cntlid range [1-0]" 00:16:57.299 }' 00:16:57.299 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:57.299 { 00:16:57.299 "nqn": "nqn.2016-06.io.spdk:cnode30840", 00:16:57.299 "max_cntlid": 0, 00:16:57.299 "method": "nvmf_create_subsystem", 00:16:57.299 "req_id": 1 00:16:57.299 } 00:16:57.299 Got JSON-RPC error response 00:16:57.299 response: 00:16:57.299 { 00:16:57.299 "code": -32602, 00:16:57.299 "message": "Invalid cntlid range [1-0]" 00:16:57.299 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.299 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16030 -I 65520 00:16:57.299 [2024-12-16 16:22:45.885660] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16030: invalid cntlid range [1-65520] 00:16:57.557 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:57.557 { 00:16:57.557 "nqn": "nqn.2016-06.io.spdk:cnode16030", 00:16:57.557 "max_cntlid": 65520, 00:16:57.557 "method": "nvmf_create_subsystem", 00:16:57.557 "req_id": 1 00:16:57.557 } 00:16:57.557 Got JSON-RPC error response 00:16:57.557 response: 00:16:57.557 { 00:16:57.557 "code": -32602, 00:16:57.557 "message": "Invalid cntlid range [1-65520]" 00:16:57.557 }' 00:16:57.557 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:57.557 { 00:16:57.557 "nqn": "nqn.2016-06.io.spdk:cnode16030", 00:16:57.557 "max_cntlid": 65520, 00:16:57.557 "method": "nvmf_create_subsystem", 00:16:57.557 "req_id": 1 00:16:57.557 } 00:16:57.557 Got JSON-RPC error response 00:16:57.557 response: 00:16:57.557 { 00:16:57.557 "code": -32602, 00:16:57.557 "message": "Invalid cntlid range [1-65520]" 00:16:57.557 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.557 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22503 -i 6 -I 5 00:16:57.557 [2024-12-16 16:22:46.086354] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22503: invalid cntlid range [6-5] 00:16:57.557 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:57.557 { 00:16:57.557 "nqn": "nqn.2016-06.io.spdk:cnode22503", 00:16:57.557 "min_cntlid": 6, 00:16:57.557 "max_cntlid": 5, 00:16:57.557 "method": "nvmf_create_subsystem", 00:16:57.557 "req_id": 1 00:16:57.557 } 00:16:57.557 Got JSON-RPC error response 00:16:57.557 response: 00:16:57.557 { 00:16:57.557 "code": -32602, 00:16:57.557 "message": "Invalid cntlid range [6-5]" 00:16:57.557 }' 00:16:57.557 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:57.557 { 00:16:57.557 "nqn": "nqn.2016-06.io.spdk:cnode22503", 00:16:57.557 "min_cntlid": 6, 00:16:57.557 "max_cntlid": 5, 00:16:57.557 "method": "nvmf_create_subsystem", 00:16:57.557 "req_id": 1 00:16:57.557 } 00:16:57.557 Got JSON-RPC error response 00:16:57.557 response: 00:16:57.557 { 00:16:57.557 "code": -32602, 00:16:57.557 "message": "Invalid cntlid range [6-5]" 00:16:57.557 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.557 16:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:57.817 { 00:16:57.817 "name": "foobar", 00:16:57.817 "method": "nvmf_delete_target", 00:16:57.817 "req_id": 1 00:16:57.817 } 00:16:57.817 Got JSON-RPC error response 00:16:57.817 response: 00:16:57.817 { 00:16:57.817 "code": -32602, 00:16:57.817 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:57.817 }' 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:57.817 { 00:16:57.817 "name": "foobar", 00:16:57.817 "method": "nvmf_delete_target", 00:16:57.817 "req_id": 1 00:16:57.817 } 00:16:57.817 Got JSON-RPC error response 00:16:57.817 response: 00:16:57.817 { 00:16:57.817 "code": -32602, 00:16:57.817 "message": "The specified target doesn't exist, cannot delete it." 00:16:57.817 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.817 rmmod nvme_tcp 00:16:57.817 rmmod nvme_fabrics 00:16:57.817 rmmod nvme_keyring 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 942698 ']' 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 942698 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 942698 ']' 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 942698 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942698 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 942698'
00:16:57.817 killing process with pid 942698
00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 942698
00:16:57.817 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 942698
00:16:58.076 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:58.076 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:58.076 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:58.076 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:16:58.076 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:16:58.076 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:58.077 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:16:58.077 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:58.077 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:58.077 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:58.077 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:58.077 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:59.981 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:59.981
00:16:59.981 real 0m12.079s
00:16:59.981 user 0m18.633s
00:16:59.981 sys 0m5.371s
00:16:59.981 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:59.981 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:59.981 ************************************
00:16:59.981 END TEST nvmf_invalid
00:16:59.981 ************************************
00:17:00.240 16:22:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:17:00.240 16:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:00.240 16:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:00.240 16:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:00.241 ************************************
00:17:00.241 START TEST nvmf_connect_stress
00:17:00.241 ************************************
00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:17:00.241 * Looking for test storage...
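The nvmftestfini teardown traced above cleans the firewall by rewriting the whole ruleset rather than deleting rules one at a time: every rule the harness adds carries an SPDK_NVMF comment, so filtering the saved ruleset and restoring it drops them all in one pass. The same idiom in isolation, a sketch assuming the SPDK_NVMF tagging convention shown in the iptr trace:

    # Re-load the ruleset minus anything tagged SPDK_NVMF, then clear the
    # initiator-side addresses, mirroring nvmf/common.sh's iptr/nvmf_tcp_fini.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1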
00:17:00.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:00.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.241 --rc genhtml_branch_coverage=1 00:17:00.241 --rc genhtml_function_coverage=1 00:17:00.241 --rc genhtml_legend=1 00:17:00.241 --rc geninfo_all_blocks=1 00:17:00.241 --rc geninfo_unexecuted_blocks=1 00:17:00.241 00:17:00.241 ' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:00.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.241 --rc genhtml_branch_coverage=1 00:17:00.241 --rc genhtml_function_coverage=1 00:17:00.241 --rc genhtml_legend=1 00:17:00.241 --rc geninfo_all_blocks=1 00:17:00.241 --rc geninfo_unexecuted_blocks=1 00:17:00.241 00:17:00.241 ' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:00.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.241 --rc genhtml_branch_coverage=1 00:17:00.241 --rc genhtml_function_coverage=1 00:17:00.241 --rc genhtml_legend=1 00:17:00.241 --rc geninfo_all_blocks=1 00:17:00.241 --rc geninfo_unexecuted_blocks=1 00:17:00.241 00:17:00.241 ' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:00.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.241 --rc genhtml_branch_coverage=1 00:17:00.241 --rc genhtml_function_coverage=1 00:17:00.241 --rc genhtml_legend=1 00:17:00.241 --rc geninfo_all_blocks=1 00:17:00.241 --rc geninfo_unexecuted_blocks=1 00:17:00.241 00:17:00.241 ' 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.241 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.500 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.500 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.500 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.500 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:00.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.501 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:07.069 16:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:07.069 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:07.069 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:07.069 Found net devices under 0000:af:00.0: cvl_0_0 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:07.069 Found net devices under 0000:af:00.1: cvl_0_1 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:07.069 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:07.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:07.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms
00:17:07.070
00:17:07.070 --- 10.0.0.2 ping statistics ---
00:17:07.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:07.070 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:07.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:07.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:17:07.070
00:17:07.070 --- 10.0.0.1 ping statistics ---
00:17:07.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:07.070 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=946907
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 946907
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 946907 ']'
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:17:07.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.070 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 [2024-12-16 16:22:54.878108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:07.070 [2024-12-16 16:22:54.878160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.070 [2024-12-16 16:22:54.956601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.070 [2024-12-16 16:22:54.979163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.070 [2024-12-16 16:22:54.979201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.070 [2024-12-16 16:22:54.979208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.070 [2024-12-16 16:22:54.979214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.070 [2024-12-16 16:22:54.979218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.070 [2024-12-16 16:22:54.980562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.070 [2024-12-16 16:22:54.980671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.070 [2024-12-16 16:22:54.980672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 [2024-12-16 16:22:55.120098] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
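With nvmf_tgt now up inside the cvl_0_0_ns_spdk namespace (target side cvl_0_0 at 10.0.0.2, initiator side cvl_0_1 at 10.0.0.1), the bring-up reduces to the short RPC sequence traced here and continued just below. Condensed into one sketch, assuming rpc_cmd wraps scripts/rpc.py against the running target and reusing the test's own arguments:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags as passed by the test
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512            # null backing bdev for the stress run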
00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 [2024-12-16 16:22:55.140316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 NULL1 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=947024 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.070 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:07.071 16:22:55 
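Everything from here to the end of the test is a liveness loop: connect_stress (PID 947024) hammers the 10.0.0.2:4420 listener for the duration given by -t 10, while the shell replays the twenty RPC snippets queued into rpc.txt above and probes the stress process with kill -0, which sends no signal and only checks that the PID still exists. The shape of that loop, as a sketch:

    # Keep feeding RPCs to the target for as long as the stress client lives.
    while kill -0 "$PERF_PID" 2>/dev/null; do   # existence probe only
        rpc_cmd < "$rpcs"                       # replay the queued RPC batch
    done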
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947024 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.071 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.071 16:22:55
[The kill -0 947024 / rpc_cmd / xtrace_disable / set +x / [[ 0 == 0 ]] cycle above then repeats unchanged, a few times per second, from 00:17:07 (16:22:55) through 00:17:16 (16:23:05); only the timestamps differ between iterations, so the intervening repeats are collapsed here.]
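What that loop is doing, in outline: connect_stress.sh keeps the target under management load while a background stress job (PID 947024 here) exercises connects; line 34 polls the job with kill -0 and line 35 fires an RPC on every pass. A minimal sketch of the watchdog pattern, where the PID argument and the use of spdk_get_version as the polled RPC are stand-ins, since the actual command the script pipes through rpc_cmd is not visible in this excerpt:

    #!/usr/bin/env bash
    # Sketch of the connect_stress watchdog (connect_stress.sh lines 34-35).
    STRESS_PID=$1                    # PID of the background stress job (947024 above)
    RPC_SOCK=/var/tmp/spdk.sock      # default SPDK RPC socket

    while kill -0 "$STRESS_PID" 2>/dev/null; do    # kill -0 only tests existence
        scripts/rpc.py -s "$RPC_SOCK" spdk_get_version >/dev/null
        sleep 0.25                                 # roughly the cadence in the trace
    done
    # kill -0 eventually fails with "No such process", exactly as logged at 16:23:05,
    # after which the script reaps the job:
    wait "$STRESS_PID" 2>/dev/null || true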
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947024 00:17:16.421 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.421 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.421 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947024 00:17:16.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (947024) - No such process 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 947024 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.986 rmmod nvme_tcp 00:17:16.986 rmmod nvme_fabrics 00:17:16.986 rmmod nvme_keyring 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 946907 ']' 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 946907 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 946907 ']' 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 946907 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946907 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946907' 00:17:16.986 killing process with pid 946907 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 946907 00:17:16.986 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 946907 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.245 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.150 00:17:19.150 real 0m19.043s 00:17:19.150 user 0m39.617s 00:17:19.150 sys 0m8.438s 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.150 ************************************ 00:17:19.150 END TEST nvmf_connect_stress 00:17:19.150 ************************************ 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.150 16:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.410 ************************************ 00:17:19.410 START TEST nvmf_fused_ordering 00:17:19.410 ************************************ 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:19.410 * Looking for test storage... 
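Two framework helpers did the heavy lifting in the teardown that closed nvmf_connect_stress above: nvmftestfini retries modprobe -r under set +e until the kernel initiator modules come out (the three rmmod lines for nvme_tcp, nvme_fabrics, and nvme_keyring), and killprocess inspects the comm name of the target PID (reactor_1 here) before signalling and reaping it. A condensed sketch of that shape, with the retry bound taken from the {1..20} loop in the trace and the error handling simplified:

    set +e                                # module removal can fail while refs drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # also pulls nvme_fabrics/nvme_keyring out
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    killprocess() {
        local pid=$1 pname
        pname=$(ps --no-headers -o comm= "$pid")  # reactor_1 for an SPDK target
        # (the real helper branches when pname is sudo; omitted here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true               # works because the target is our child
    }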
00:17:19.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.410 --rc genhtml_branch_coverage=1 00:17:19.410 --rc genhtml_function_coverage=1 00:17:19.410 --rc genhtml_legend=1 00:17:19.410 --rc geninfo_all_blocks=1 00:17:19.410 --rc geninfo_unexecuted_blocks=1 00:17:19.410 00:17:19.410 ' 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.410 --rc genhtml_branch_coverage=1 00:17:19.410 --rc genhtml_function_coverage=1 00:17:19.410 --rc genhtml_legend=1 00:17:19.410 --rc geninfo_all_blocks=1 00:17:19.410 --rc geninfo_unexecuted_blocks=1 00:17:19.410 00:17:19.410 ' 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.410 --rc genhtml_branch_coverage=1 00:17:19.410 --rc genhtml_function_coverage=1 00:17:19.410 --rc genhtml_legend=1 00:17:19.410 --rc geninfo_all_blocks=1 00:17:19.410 --rc geninfo_unexecuted_blocks=1 00:17:19.410 00:17:19.410 ' 00:17:19.410 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.411 --rc genhtml_branch_coverage=1 00:17:19.411 --rc genhtml_function_coverage=1 00:17:19.411 --rc genhtml_legend=1 00:17:19.411 --rc geninfo_all_blocks=1 00:17:19.411 --rc geninfo_unexecuted_blocks=1 00:17:19.411 00:17:19.411 ' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
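The scripts/common.sh trace just above is a version gate: lt 1.15 2 asks whether the installed lcov (1.15) is older than 2.x by splitting both strings on '.', '-' and ':' and comparing the components numerically, left to right, with missing fields treated as 0. A compact re-implementation of that comparison logic (function name kept from the trace; the surrounding decimal-validation checks, which assume numeric components, are omitted):

    cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent fields count as 0
            if (( a > b )); then [[ $op == '>' ]]; return
            elif (( a < b )); then [[ $op == '<' ]]; return
            fi
        done
        [[ $op == '=' ]]             # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"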
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.411 16:23:07
[paths/export.sh@3 and @4 prepend further copies of the go and protoc tool directories, @5 exports PATH, and @6 echoes the result; each step logs the same ~1 kB PATH string, which already carries multiple copies of the golangci/protoc/go triplet from earlier sourcing rounds, so the repeats are collapsed here.]
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:17:19.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.411 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:25.979 16:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:25.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:25.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:25.979 Found net devices under 0000:af:00.0: cvl_0_0 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:25.979 Found net devices under 0000:af:00.1: cvl_0_1 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
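The device scan above resolves which kernel net interfaces sit behind the supported NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox IDs), keeping only ports whose links are up: here both E810 ports, 0000:af:00.0 and 0000:af:00.1, map to cvl_0_0 and cvl_0_1. The same walk can be done directly against sysfs; a sketch, with the vendor/device pair hard-coded to the E810 ID seen in this run:

    #!/usr/bin/env bash
    # Find net devices backed by Intel E810 ports (vendor 0x8086, device 0x159b).
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        [ "$(cat "$pci/device")" = 0x159b ] || continue
        for net in "$pci"/net/*; do
            [ -e "$net" ] || continue          # port may have no netdev bound
            dev=${net##*/}
            state=$(cat "$net/operstate" 2>/dev/null)
            echo "Found net device under ${pci##*/}: $dev ($state)"
        done
    done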
-- # net_devs+=("${pci_net_devs[@]}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.979 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:25.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:17:25.980 00:17:25.980 --- 10.0.0.2 ping statistics --- 00:17:25.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.980 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:17:25.980 00:17:25.980 --- 10.0.0.1 ping statistics --- 00:17:25.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.980 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=952071 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 952071 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 952071 ']' 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
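The topology those commands build: the target port cvl_0_0 is moved into a private namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is allowed through, and one ping in each direction proves the loop before any NVMe traffic flows. Reassembled from the trace (interface names and addresses are the ones this host happened to have; the iptables comment tag is dropped):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> root ns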
/var/tmp/spdk.sock...' 00:17:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.980 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 [2024-12-16 16:23:13.940120] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:25.980 [2024-12-16 16:23:13.940165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.980 [2024-12-16 16:23:14.017298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.980 [2024-12-16 16:23:14.037909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.980 [2024-12-16 16:23:14.037945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.980 [2024-12-16 16:23:14.037955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.980 [2024-12-16 16:23:14.037961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.980 [2024-12-16 16:23:14.037966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.980 [2024-12-16 16:23:14.038473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 [2024-12-16 16:23:14.180362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 [2024-12-16 16:23:14.200556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 NULL1 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:25.980 [2024-12-16 16:23:14.259733] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
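Collected from the rpc_cmd calls above, this is the whole target bring-up for the test once waitforlisten has seen pid 952071 answer on /var/tmp/spdk.sock: one TCP transport (with the -o and -u 8192 options from the trace), one subsystem capped at 10 connections, a listener on the namespaced address, and a null bdev exposed as namespace 1. Expressed as plain rpc.py calls, since the rpc_cmd wrapper hides the exact client invocation:

    SOCK=/var/tmp/spdk.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    RPC="scripts/rpc.py -s $SOCK"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512     # 1000 MiB, 512 B blocks -> "size: 1GB"
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns "$NQN" NULL1  # becomes Namespace ID: 1

    test/nvme/fused_ordering/fused_ordering \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"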
00:17:25.980 [2024-12-16 16:23:14.259777] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952148 ] 00:17:26.239 Attached to nqn.2016-06.io.spdk:cnode1 00:17:26.239 Namespace ID: 1 size: 1GB
00:17:26.239 fused_ordering(0) ... fused_ordering(1023): all 1024 fused-ordering iterations completed between 00:17:26.239 and 00:17:27.584 (per-iteration counter output omitted)
00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.584 rmmod nvme_tcp 00:17:27.584 rmmod nvme_fabrics 00:17:27.584 rmmod nvme_keyring 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:27.584 16:23:16
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 952071 ']' 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 952071 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 952071 ']' 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 952071 00:17:27.584 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952071 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952071' 00:17:27.843 killing process with pid 952071 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 952071 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 952071 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.843 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:30.379 00:17:30.379 real 0m10.695s 00:17:30.379 user 0m5.088s 00:17:30.379 sys 0m5.828s 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.379 ************************************ 00:17:30.379 END TEST nvmf_fused_ordering 00:17:30.379 
************************************ 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.379 ************************************ 00:17:30.379 START TEST nvmf_ns_masking 00:17:30.379 ************************************ 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:30.379 * Looking for test storage... 00:17:30.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.379 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.380 --rc genhtml_branch_coverage=1 00:17:30.380 --rc genhtml_function_coverage=1 00:17:30.380 --rc genhtml_legend=1 00:17:30.380 --rc geninfo_all_blocks=1 00:17:30.380 --rc geninfo_unexecuted_blocks=1 00:17:30.380 00:17:30.380 ' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.380 --rc genhtml_branch_coverage=1 00:17:30.380 --rc genhtml_function_coverage=1 00:17:30.380 --rc genhtml_legend=1 00:17:30.380 --rc geninfo_all_blocks=1 00:17:30.380 --rc geninfo_unexecuted_blocks=1 00:17:30.380 00:17:30.380 ' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.380 --rc genhtml_branch_coverage=1 00:17:30.380 --rc genhtml_function_coverage=1 00:17:30.380 --rc genhtml_legend=1 00:17:30.380 --rc geninfo_all_blocks=1 00:17:30.380 --rc geninfo_unexecuted_blocks=1 00:17:30.380 00:17:30.380 ' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.380 --rc genhtml_branch_coverage=1 00:17:30.380 --rc genhtml_function_coverage=1 00:17:30.380 --rc genhtml_legend=1 00:17:30.380 --rc geninfo_all_blocks=1 00:17:30.380 --rc geninfo_unexecuted_blocks=1 00:17:30.380 00:17:30.380 ' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ba5369e5-764c-485a-8ea2-a82061bd5684 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3eb6b37c-ab42-4d95-96d4-81eed0355f54 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=877665cc-a5c8-4485-b2d6-8ca409496279 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.380 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.381 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.042 16:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:37.042 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:37.042 16:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:37.042 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.042 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:37.043 Found net devices under 0000:af:00.0: cvl_0_0 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
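The interface names printed here are read straight out of sysfs: for each matching PCI function the harness globs /sys/bus/pci/devices/$pci/net/ to find the kernel netdev bound to it. A minimal sketch of the same lookup, assuming the first E810 port (0x8086:0x159b, ice driver) at 0000:af:00.0 as in this run:

  pci=0000:af:00.0
  # The kernel publishes the netdev bound to a PCI function under sysfs
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  # Strip the directory prefix, leaving bare interface names such as cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"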
00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:37.043 Found net devices under 0000:af:00.1: cvl_0_1 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.043 16:23:24 
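nvmf_tcp_init above builds both ends of the fabric on one host: the target port cvl_0_0 moves into a fresh network namespace with 10.0.0.2, while the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, so the two E810 ports talk over a real link. Condensed from the trace, same device names and addresses:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up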
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:17:37.043 00:17:37.043 --- 10.0.0.2 ping statistics --- 00:17:37.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.043 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:17:37.043 00:17:37.043 --- 10.0.0.1 ping statistics --- 00:17:37.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.043 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=956006 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 956006 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 956006 ']' 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.043 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.043 [2024-12-16 16:23:24.859451] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:37.043 [2024-12-16 16:23:24.859493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.043 [2024-12-16 16:23:24.937940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.043 [2024-12-16 16:23:24.959294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.043 [2024-12-16 16:23:24.959329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.043 [2024-12-16 16:23:24.959336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.043 [2024-12-16 16:23:24.959342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.043 [2024-12-16 16:23:24.959346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
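nvmfappstart launches nvmf_tgt inside that namespace and waitforlisten blocks until the UNIX-domain RPC socket answers before any rpc.py call is made. A reduced sketch with shortened paths; the polling loop below is a simplification of waitforlisten, not its actual body:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the RPC socket; rpc_get_methods succeeds once the app is listening
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target process died
      sleep 0.1
    done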
00:17:37.043 [2024-12-16 16:23:24.959843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:37.043 [2024-12-16 16:23:25.262958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:37.043 Malloc1 00:17:37.043 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:37.303 Malloc2 00:17:37.303 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:37.561 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:37.561 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.820 [2024-12-16 16:23:26.270291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.820 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:37.820 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 877665cc-a5c8-4485-b2d6-8ca409496279 -a 10.0.0.2 -s 4420 -i 4 00:17:38.079 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:38.079 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:38.079 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:38.079 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:38.079 
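With the target listening, the script provisions the subsystem over RPC and the initiator connects with an explicit host NQN (-q) and host ID (-I); namespace 1 is added without --no-auto-visible at this point, so it is immediately visible to any connecting host. Condensed from the trace, with rpc.py standing in for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MB bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
         -I 877665cc-a5c8-4485-b2d6-8ca409496279 -a 10.0.0.2 -s 4420 -i 4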
16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.982 [ 0]:0x1 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ffe8be7e165c484786f0eec7860f48ab 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ffe8be7e165c484786f0eec7860f48ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.982 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:40.242 [ 0]:0x1 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ffe8be7e165c484786f0eec7860f48ab 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ffe8be7e165c484786f0eec7860f48ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.242 16:23:28 
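The ns_is_visible checks above (target/ns_masking.sh@43-45) decide visibility from two probes: the namespace ID must show up in nvme list-ns, and the NGUID returned by nvme id-ns must be non-zero, because the target reports an all-zero NGUID for a namespace the host is not allowed to see. Roughly, as reconstructed from the trace:

    ns_is_visible() {
      local nsid=$1                       # e.g. 0x1
      nvme list-ns /dev/nvme0 | grep "$nsid"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      # visible namespaces carry a real NGUID; masked ones read back as zeros
      [[ $nguid != "00000000000000000000000000000000" ]]
    }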
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.242 [ 1]:0x2 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.242 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.501 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af9e36b640594540a7afbf487ba56efc 00:17:40.501 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.501 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:40.501 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.760 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.760 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:41.019 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:41.019 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 877665cc-a5c8-4485-b2d6-8ca409496279 -a 10.0.0.2 -s 4420 -i 4 00:17:41.019 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:41.019 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:41.019 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.277 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:41.277 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:41.277 16:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.181 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.440 [ 0]:0x2 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=af9e36b640594540a7afbf487ba56efc 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.440 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.699 [ 0]:0x1 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ffe8be7e165c484786f0eec7860f48ab 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ffe8be7e165c484786f0eec7860f48ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.699 [ 1]:0x2 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af9e36b640594540a7afbf487ba56efc 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.699 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.957 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:43.957 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.958 16:23:32 
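This is the core of the masking test: namespace 1 was re-added with --no-auto-visible, so it stays hidden from every host until its NQN is explicitly allowed, and visibility is flipped at runtime while the initiator stays connected. The three RPCs as issued above:

    rpc.py nvmf_subsystem_add_ns   nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host        nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
    rpc.py nvmf_ns_remove_host     nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again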
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.958 [ 0]:0x2 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af9e36b640594540a7afbf487ba56efc 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:43.958 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.217 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:44.217 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:44.217 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 877665cc-a5c8-4485-b2d6-8ca409496279 -a 10.0.0.2 -s 4420 -i 4 00:17:44.476 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:44.476 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:44.476 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.476 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:44.476 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:44.476 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:46.379 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.649 [ 0]:0x1 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ffe8be7e165c484786f0eec7860f48ab 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ffe8be7e165c484786f0eec7860f48ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.649 [ 1]:0x2 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af9e36b640594540a7afbf487ba56efc 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.649 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.908 [ 0]:0x2 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af9e36b640594540a7afbf487ba56efc 00:17:46.908 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.167 16:23:35 
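The @94/@107 blocks above run through the NOT wrapper from autotest_common.sh, which executes its argument and inverts the exit status so that an expected failure (the masked namespace must not be visible) counts as a pass. Reduced to its observable effect; the real function also validates the argument and handles signal exits, which this sketch glosses over:

    NOT() {
      local es=0
      "$@" || es=$?        # run the command, capture its exit status
      (( es != 0 ))        # invert: succeed only if the command failed
    }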
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:47.167 [2024-12-16 16:23:35.689534] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:47.167 request: 00:17:47.167 { 00:17:47.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.167 "nsid": 2, 00:17:47.167 "host": "nqn.2016-06.io.spdk:host1", 00:17:47.167 "method": "nvmf_ns_remove_host", 00:17:47.167 "req_id": 1 00:17:47.167 } 00:17:47.167 Got JSON-RPC error response 00:17:47.167 response: 00:17:47.167 { 00:17:47.167 "code": -32602, 00:17:47.167 "message": "Invalid parameters" 00:17:47.167 } 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:47.167 16:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.167 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.426 [ 0]:0x2 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af9e36b640594540a7afbf487ba56efc 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af9e36b640594540a7afbf487ba56efc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=957946 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 957946 
/var/tmp/host.sock 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 957946 ']' 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.426 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.426 [2024-12-16 16:23:35.921302] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:47.426 [2024-12-16 16:23:35.921346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957946 ] 00:17:47.426 [2024-12-16 16:23:35.996831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.426 [2024-12-16 16:23:36.018824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.685 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.685 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:47.685 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.943 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:48.201 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ba5369e5-764c-485a-8ea2-a82061bd5684 00:17:48.201 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:48.201 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BA5369E5764C485A8EA2A82061BD5684 -i 00:17:48.460 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3eb6b37c-ab42-4d95-96d4-81eed0355f54 00:17:48.460 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:48.460 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3EB6B37CAB424D9596D481EED0355F54 -i 00:17:48.460 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:48.718 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:48.977 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:48.977 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:49.236 nvme0n1 00:17:49.236 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:49.236 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:49.494 nvme1n2 00:17:49.494 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:49.494 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:49.494 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:49.494 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:49.494 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:49.752 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:49.752 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:49.752 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:49.752 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:50.011 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ba5369e5-764c-485a-8ea2-a82061bd5684 == \b\a\5\3\6\9\e\5\-\7\6\4\c\-\4\8\5\a\-\8\e\a\2\-\a\8\2\0\6\1\b\d\5\6\8\4 ]] 00:17:50.011 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:50.011 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:50.011 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:50.270 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
3eb6b37c-ab42-4d95-96d4-81eed0355f54 == \3\e\b\6\b\3\7\c\-\a\b\4\2\-\4\d\9\5\-\9\6\d\4\-\8\1\e\e\d\0\3\5\5\f\5\4 ]] 00:17:50.270 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.270 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ba5369e5-764c-485a-8ea2-a82061bd5684 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BA5369E5764C485A8EA2A82061BD5684 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BA5369E5764C485A8EA2A82061BD5684 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:50.529 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BA5369E5764C485A8EA2A82061BD5684 00:17:50.788 [2024-12-16 16:23:39.195269] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:50.788 [2024-12-16 16:23:39.195300] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:50.788 [2024-12-16 16:23:39.195309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:50.788 request: 00:17:50.788 { 00:17:50.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.788 "namespace": { 00:17:50.788 "bdev_name": 
"invalid", 00:17:50.788 "nsid": 1, 00:17:50.788 "nguid": "BA5369E5764C485A8EA2A82061BD5684", 00:17:50.788 "no_auto_visible": false, 00:17:50.788 "hide_metadata": false 00:17:50.788 }, 00:17:50.788 "method": "nvmf_subsystem_add_ns", 00:17:50.788 "req_id": 1 00:17:50.788 } 00:17:50.788 Got JSON-RPC error response 00:17:50.788 response: 00:17:50.788 { 00:17:50.788 "code": -32602, 00:17:50.788 "message": "Invalid parameters" 00:17:50.788 } 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ba5369e5-764c-485a-8ea2-a82061bd5684 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:50.788 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BA5369E5764C485A8EA2A82061BD5684 -i 00:17:51.047 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:52.950 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:52.950 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:52.950 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 957946 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 957946 ']' 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 957946 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957946 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957946' 00:17:53.209 killing process with pid 957946 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 957946 00:17:53.209 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 957946 00:17:53.468 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.727 rmmod nvme_tcp 00:17:53.727 rmmod nvme_fabrics 00:17:53.727 rmmod nvme_keyring 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 956006 ']' 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 956006 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 956006 ']' 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 956006 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956006 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956006' 00:17:53.727 killing process with pid 956006 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 956006 00:17:53.727 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 956006 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:53.986 
16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.986 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:56.522 00:17:56.522 real 0m26.036s 00:17:56.522 user 0m30.952s 00:17:56.522 sys 0m7.001s 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:56.522 ************************************ 00:17:56.522 END TEST nvmf_ns_masking 00:17:56.522 ************************************ 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:56.522 ************************************ 00:17:56.522 START TEST nvmf_nvme_cli 00:17:56.522 ************************************ 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:56.522 * Looking for test storage... 
00:17:56.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:56.522 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:56.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.523 --rc genhtml_branch_coverage=1 00:17:56.523 --rc genhtml_function_coverage=1 00:17:56.523 --rc genhtml_legend=1 00:17:56.523 --rc geninfo_all_blocks=1 00:17:56.523 --rc geninfo_unexecuted_blocks=1 00:17:56.523 00:17:56.523 ' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:56.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.523 --rc genhtml_branch_coverage=1 00:17:56.523 --rc genhtml_function_coverage=1 00:17:56.523 --rc genhtml_legend=1 00:17:56.523 --rc geninfo_all_blocks=1 00:17:56.523 --rc geninfo_unexecuted_blocks=1 00:17:56.523 00:17:56.523 ' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:56.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.523 --rc genhtml_branch_coverage=1 00:17:56.523 --rc genhtml_function_coverage=1 00:17:56.523 --rc genhtml_legend=1 00:17:56.523 --rc geninfo_all_blocks=1 00:17:56.523 --rc geninfo_unexecuted_blocks=1 00:17:56.523 00:17:56.523 ' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:56.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.523 --rc genhtml_branch_coverage=1 00:17:56.523 --rc genhtml_function_coverage=1 00:17:56.523 --rc genhtml_legend=1 00:17:56.523 --rc geninfo_all_blocks=1 00:17:56.523 --rc geninfo_unexecuted_blocks=1 00:17:56.523 00:17:56.523 ' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
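[annotation] The cmp_versions trace above is the stock lcov version gate: each version string is split on the separators in IFS=.-: and the fields are compared numerically, left to right. A standalone sketch of that logic in bash, assuming numeric fields only (the real scripts/common.sh additionally regex-checks each field, as the [[ 1 =~ ^[0-9]+$ ]] steps show); version_lt is a hypothetical name, not the helper's:

version_lt() {
    # Split both versions on the same separators the trace uses (IFS=.-:)
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing fields count as 0, so "2" compares like "2.0"; 10# forces base 10
        (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
        (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov older than 2: add branch/function coverage flags"   # matches the trace: 1 < 2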
00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:56.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.523 16:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:56.523 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:03.093 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:03.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:03.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.094 
16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:03.094 Found net devices under 0000:af:00.0: cvl_0_0 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:03.094 Found net devices under 0000:af:00.1: cvl_0_1 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:03.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:18:03.094 00:18:03.094 --- 10.0.0.2 ping statistics --- 00:18:03.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.094 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:18:03.094 00:18:03.094 --- 10.0.0.1 ping statistics --- 00:18:03.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.094 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=962569 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 962569 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 962569 ']' 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.094 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.094 [2024-12-16 16:23:50.867655] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:03.094 [2024-12-16 16:23:50.867700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.094 [2024-12-16 16:23:50.948087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.094 [2024-12-16 16:23:50.971741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.094 [2024-12-16 16:23:50.971780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.094 [2024-12-16 16:23:50.971787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.094 [2024-12-16 16:23:50.971794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.094 [2024-12-16 16:23:50.971799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.094 [2024-12-16 16:23:50.973134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.094 [2024-12-16 16:23:50.973186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.094 [2024-12-16 16:23:50.973270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.094 [2024-12-16 16:23:50.973271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.094 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.094 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:03.094 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 [2024-12-16 16:23:51.113426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 Malloc0 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
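[annotation] At this point the trace has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and blocked in waitforlisten until the RPC socket answered. A condensed sketch of that start-and-wait step, using the binary path, flags, and namespace from this run; the polling loop is illustrative, not the verbatim autotest_common.sh helper:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target in the test namespace with the same flags as above (-i 0 -e 0xFFFF -m 0xF)
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the app services requests
until $SPDK/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"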
00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 Malloc1 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 [2024-12-16 16:23:51.214933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:03.095 00:18:03.095 Discovery Log Number of Records 2, Generation counter 2 00:18:03.095 =====Discovery Log Entry 0====== 00:18:03.095 trtype: tcp 00:18:03.095 adrfam: ipv4 00:18:03.095 subtype: current discovery subsystem 00:18:03.095 treq: not required 00:18:03.095 portid: 0 00:18:03.095 trsvcid: 4420 00:18:03.095 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:03.095 traddr: 10.0.0.2 00:18:03.095 eflags: explicit discovery connections, duplicate discovery information 00:18:03.095 sectype: none 00:18:03.095 =====Discovery Log Entry 1====== 00:18:03.095 trtype: tcp 00:18:03.095 adrfam: ipv4 00:18:03.095 subtype: nvme subsystem 00:18:03.095 treq: not required 00:18:03.095 portid: 0 00:18:03.095 trsvcid: 4420 00:18:03.095 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:03.095 traddr: 10.0.0.2 00:18:03.095 eflags: none 00:18:03.095 sectype: none 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:03.095 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:04.031 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:04.031 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:04.031 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.031 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:04.031 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:04.031 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:05.933 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:05.933 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:05.933 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:06.192 16:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:06.192 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:06.193 /dev/nvme0n2 ]] 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.193 16:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.193 rmmod nvme_tcp 00:18:06.193 rmmod nvme_fabrics 00:18:06.193 rmmod nvme_keyring 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 962569 ']' 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 962569 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 962569 ']' 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 962569 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.193 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962569 
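[annotation] The disconnect above completes the host side of the pass: discover the target, connect, count the namespaces by serial, disconnect. Condensed from the trace, with the addresses, NQNs, and host identity exactly as this run used them:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
# Discovery reports two entries: the discovery subsystem and nqn.2016-06.io.spdk:cnode1
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
# Connecting exposes the two Malloc namespaces (seen above as /dev/nvme0n1 and /dev/nvme0n2)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
sleep 2
# The test asserts this count equals the number of attached namespaces (2)
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # expect "disconnected 1 controller(s)"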
00:18:06.452 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.452 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.452 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962569' 00:18:06.452 killing process with pid 962569 00:18:06.452 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 962569 00:18:06.452 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 962569 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.452 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.453 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.992 00:18:08.992 real 0m12.444s 00:18:08.992 user 0m17.808s 00:18:08.992 sys 0m5.055s 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:08.992 ************************************ 00:18:08.992 END TEST nvmf_nvme_cli 00:18:08.992 ************************************ 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.992 ************************************ 00:18:08.992 START TEST nvmf_vfio_user 00:18:08.992 ************************************ 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:08.992 * Looking for test storage... 00:18:08.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:08.992 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.993 --rc genhtml_branch_coverage=1 00:18:08.993 --rc genhtml_function_coverage=1 00:18:08.993 --rc genhtml_legend=1 00:18:08.993 --rc geninfo_all_blocks=1 00:18:08.993 --rc geninfo_unexecuted_blocks=1 00:18:08.993 00:18:08.993 ' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.993 --rc genhtml_branch_coverage=1 00:18:08.993 --rc genhtml_function_coverage=1 00:18:08.993 --rc genhtml_legend=1 00:18:08.993 --rc geninfo_all_blocks=1 00:18:08.993 --rc geninfo_unexecuted_blocks=1 00:18:08.993 00:18:08.993 ' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.993 --rc genhtml_branch_coverage=1 00:18:08.993 --rc genhtml_function_coverage=1 00:18:08.993 --rc genhtml_legend=1 00:18:08.993 --rc geninfo_all_blocks=1 00:18:08.993 --rc geninfo_unexecuted_blocks=1 00:18:08.993 00:18:08.993 ' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.993 --rc genhtml_branch_coverage=1 00:18:08.993 --rc genhtml_function_coverage=1 00:18:08.993 --rc genhtml_legend=1 00:18:08.993 --rc geninfo_all_blocks=1 00:18:08.993 --rc geninfo_unexecuted_blocks=1 00:18:08.993 00:18:08.993 ' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
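The `'[' '' -eq 1 ']'` step just traced is what produces the "integer expression expected" complaint from nvmf/common.sh line 33: an unset flag expands to the empty string, which test(1) refuses to compare numerically. A hedged one-line hardening of that pattern follows; the real variable name is not visible in this trace, so $SOME_FLAG below is a placeholder:

    # Default the expansion so test(1) always sees an integer.
    # $SOME_FLAG is hypothetical; the flag checked at common.sh line 33 is elided above.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :   # branch body unchanged; only the guard differs
    fi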
00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=963617 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 963617' 00:18:08.993 Process pid: 963617 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 963617 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 963617 ']' 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.993 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.994 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.994 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 [2024-12-16 16:23:57.445693] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:08.994 [2024-12-16 16:23:57.445738] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.994 [2024-12-16 16:23:57.519830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.994 [2024-12-16 16:23:57.542919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.994 [2024-12-16 16:23:57.542956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:08.994 [2024-12-16 16:23:57.542963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.994 [2024-12-16 16:23:57.542970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.994 [2024-12-16 16:23:57.542975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.994 [2024-12-16 16:23:57.544309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.994 [2024-12-16 16:23:57.544419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.994 [2024-12-16 16:23:57.544525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.994 [2024-12-16 16:23:57.544527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.252 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.252 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:09.252 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:10.188 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:10.447 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:10.447 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:10.447 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:10.447 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:10.447 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:10.705 Malloc1 00:18:10.705 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:10.705 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:10.963 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:11.221 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:11.221 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:11.221 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:11.479 Malloc2 00:18:11.479 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
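The RPCs traced above are the whole per-device vfio-user bring-up: create the VFIOUSER transport once, then give each device a malloc bdev, a subsystem, a namespace, and a listener; device 2 completes with the same add_ns/add_listener calls just below. Condensed into a sketch, with $SPDK_DIR standing in for the jenkins workspace checkout:

    RPC="$SPDK_DIR/scripts/rpc.py"
    "$RPC" nvmf_create_transport -t VFIOUSER            # once per target
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"  # 64 MB bdev, 512-byte blocks
        "$RPC" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        "$RPC" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done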
00:18:11.479 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:11.737 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:11.996 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:11.996 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:11.996 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:11.996 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:11.996 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:11.996 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:11.996 [2024-12-16 16:24:00.521535] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:11.996 [2024-12-16 16:24:00.521573] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964293 ] 00:18:11.997 [2024-12-16 16:24:00.562568] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:11.997 [2024-12-16 16:24:00.567961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:11.997 [2024-12-16 16:24:00.567980] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f773db1f000 00:18:11.997 [2024-12-16 16:24:00.568957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.569950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.570954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.571966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.572969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.573972] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.574977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.575988] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.997 [2024-12-16 16:24:00.576996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:11.997 [2024-12-16 16:24:00.577005] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f773c829000 00:18:11.997 [2024-12-16 16:24:00.577922] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:11.997 [2024-12-16 16:24:00.587365] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:11.997 [2024-12-16 16:24:00.587388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:11.997 [2024-12-16 16:24:00.593082] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:11.997 [2024-12-16 16:24:00.593124] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:11.997 [2024-12-16 16:24:00.593206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:11.997 [2024-12-16 16:24:00.593224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:11.997 [2024-12-16 16:24:00.593229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:11.997 [2024-12-16 16:24:00.594080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:11.997 [2024-12-16 16:24:00.594089] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:11.997 [2024-12-16 16:24:00.594100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:11.997 [2024-12-16 16:24:00.595086] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:11.997 [2024-12-16 16:24:00.595098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:11.997 [2024-12-16 16:24:00.595105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:11.997 [2024-12-16 16:24:00.596089] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:11.997 [2024-12-16 16:24:00.596101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:11.997 [2024-12-16 16:24:00.597100] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
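The offsets in these get_reg/set_reg records, here and in the records that follow, are the standard NVMe controller registers, which makes the rest of the init handshake readable at a glance:

    # 0x00  CAP   capabilities           (0x201e0100ff above)
    # 0x08  VS    specification version  (0x10300 = NVMe 1.3)
    # 0x14  CC    configuration          (CC.EN written to enable the controller)
    # 0x1c  CSTS  status                 (CSTS.RDY polled for 0, then 1)
    # 0x24  AQA   admin queue attributes
    # 0x28  ASQ   admin submission queue base address
    # 0x30  ACQ   admin completion queue base address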
00:18:11.997 [2024-12-16 16:24:00.597108] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:11.997 [2024-12-16 16:24:00.597113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:11.997 [2024-12-16 16:24:00.597119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:11.997 [2024-12-16 16:24:00.597227] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:11.997 [2024-12-16 16:24:00.597234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:11.997 [2024-12-16 16:24:00.597239] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:11.997 [2024-12-16 16:24:00.598105] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:11.997 [2024-12-16 16:24:00.599107] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:11.997 [2024-12-16 16:24:00.600114] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:11.997 [2024-12-16 16:24:00.601115] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:11.997 [2024-12-16 16:24:00.601181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:11.997 [2024-12-16 16:24:00.602127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:11.997 [2024-12-16 16:24:00.602135] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:11.997 [2024-12-16 16:24:00.602140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:11.997 [2024-12-16 16:24:00.602163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:11.997 [2024-12-16 16:24:00.602181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.997 [2024-12-16 16:24:00.602185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.997 [2024-12-16 16:24:00.602197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:11.997 [2024-12-16 16:24:00.602235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:11.997 [2024-12-16 16:24:00.602244] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:11.997 [2024-12-16 16:24:00.602249] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:11.997 [2024-12-16 16:24:00.602253] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:11.997 [2024-12-16 16:24:00.602257] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:11.997 [2024-12-16 16:24:00.602262] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:11.997 [2024-12-16 16:24:00.602266] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:11.997 [2024-12-16 16:24:00.602270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:11.997 [2024-12-16 16:24:00.602303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:11.997 [2024-12-16 16:24:00.602313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.997 [2024-12-16 16:24:00.602321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.997 [2024-12-16 16:24:00.602328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.997 [2024-12-16 16:24:00.602335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.997 [2024-12-16 16:24:00.602340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:11.997 [2024-12-16 16:24:00.602364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:11.997 [2024-12-16 16:24:00.602369] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:11.997 
[2024-12-16 16:24:00.602374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:11.997 [2024-12-16 16:24:00.602402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:11.997 [2024-12-16 16:24:00.602449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:11.997 [2024-12-16 16:24:00.602466] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:11.997 [2024-12-16 16:24:00.602470] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:11.997 [2024-12-16 16:24:00.602473] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.997 [2024-12-16 16:24:00.602479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602500] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:11.998 [2024-12-16 16:24:00.602508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602523] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:11.998 [2024-12-16 16:24:00.602527] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.998 [2024-12-16 16:24:00.602530] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.998 [2024-12-16 16:24:00.602535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602581] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:11.998 [2024-12-16 16:24:00.602585] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.998 [2024-12-16 16:24:00.602588] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.998 [2024-12-16 16:24:00.602593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602645] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:11.998 [2024-12-16 16:24:00.602649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:11.998 [2024-12-16 16:24:00.602654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:11.998 [2024-12-16 16:24:00.602670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602752] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:11.998 [2024-12-16 16:24:00.602756] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:11.998 [2024-12-16 16:24:00.602759] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:11.998 [2024-12-16 16:24:00.602762] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:11.998 [2024-12-16 16:24:00.602765] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:11.998 [2024-12-16 16:24:00.602771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:11.998 [2024-12-16 16:24:00.602777] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:11.998 [2024-12-16 16:24:00.602781] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:11.998 [2024-12-16 16:24:00.602784] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.998 [2024-12-16 16:24:00.602789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602795] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:11.998 [2024-12-16 16:24:00.602799] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.998 [2024-12-16 16:24:00.602802] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.998 [2024-12-16 16:24:00.602807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602813] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:11.998 [2024-12-16 16:24:00.602817] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:11.998 [2024-12-16 16:24:00.602820] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.998 [2024-12-16 16:24:00.602825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:11.998 [2024-12-16 16:24:00.602831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:11.998 [2024-12-16 16:24:00.602842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0
00:18:11.998 [2024-12-16 16:24:00.602854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:18:11.998 [2024-12-16 16:24:00.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:18:11.998 =====================================================
00:18:11.998 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:11.998 =====================================================
00:18:11.998 Controller Capabilities/Features
00:18:11.998 ================================
00:18:11.998 Vendor ID: 4e58
00:18:11.998 Subsystem Vendor ID: 4e58
00:18:11.998 Serial Number: SPDK1
00:18:11.998 Model Number: SPDK bdev Controller
00:18:11.998 Firmware Version: 25.01
00:18:11.998 Recommended Arb Burst: 6
00:18:11.998 IEEE OUI Identifier: 8d 6b 50
00:18:11.998 Multi-path I/O
00:18:11.998 May have multiple subsystem ports: Yes
00:18:11.998 May have multiple controllers: Yes
00:18:11.998 Associated with SR-IOV VF: No
00:18:11.998 Max Data Transfer Size: 131072
00:18:11.998 Max Number of Namespaces: 32
00:18:11.998 Max Number of I/O Queues: 127
00:18:11.998 NVMe Specification Version (VS): 1.3
00:18:11.998 NVMe Specification Version (Identify): 1.3
00:18:11.998 Maximum Queue Entries: 256
00:18:11.998 Contiguous Queues Required: Yes
00:18:11.998 Arbitration Mechanisms Supported
00:18:11.998 Weighted Round Robin: Not Supported
00:18:11.998 Vendor Specific: Not Supported
00:18:11.998 Reset Timeout: 15000 ms
00:18:11.998 Doorbell Stride: 4 bytes
00:18:11.998 NVM Subsystem Reset: Not Supported
00:18:11.998 Command Sets Supported
00:18:11.998 NVM Command Set: Supported
00:18:11.998 Boot Partition: Not Supported
00:18:11.998 Memory Page Size Minimum: 4096 bytes
00:18:11.998 Memory Page Size Maximum: 4096 bytes
00:18:11.998 Persistent Memory Region: Not Supported
00:18:11.998 Optional Asynchronous Events Supported
00:18:11.998 Namespace Attribute Notices: Supported
00:18:11.998 Firmware Activation Notices: Not Supported
00:18:11.998 ANA Change Notices: Not Supported
00:18:11.998 PLE Aggregate Log Change Notices: Not Supported
00:18:11.998 LBA Status Info Alert Notices: Not Supported
00:18:11.998 EGE Aggregate Log Change Notices: Not Supported
00:18:11.998 Normal NVM Subsystem Shutdown event: Not Supported
00:18:11.998 Zone Descriptor Change Notices: Not Supported
00:18:11.998 Discovery Log Change Notices: Not Supported
00:18:11.998 Controller Attributes
00:18:11.998 128-bit Host Identifier: Supported
00:18:11.998 Non-Operational Permissive Mode: Not Supported
00:18:11.998 NVM Sets: Not Supported
00:18:11.998 Read Recovery Levels: Not Supported
00:18:11.998 Endurance Groups: Not Supported
00:18:11.998 Predictable Latency Mode: Not Supported
00:18:11.998 Traffic Based Keep ALive: Not Supported
00:18:11.998 Namespace Granularity: Not Supported
00:18:11.998 SQ Associations: Not Supported
00:18:11.998 UUID List: Not Supported
00:18:11.998 Multi-Domain Subsystem: Not Supported
00:18:11.998 Fixed Capacity Management: Not Supported
00:18:11.998 Variable Capacity Management: Not Supported
00:18:11.998 Delete Endurance Group: Not Supported
00:18:11.998 Delete NVM Set: Not Supported
00:18:11.998 Extended LBA Formats Supported: Not Supported
00:18:11.998 Flexible Data Placement Supported: Not Supported
00:18:11.998
00:18:11.998 Controller Memory Buffer Support
00:18:11.998 ================================
00:18:11.998 Supported: No
00:18:11.998
00:18:11.998 Persistent Memory Region Support
00:18:11.998 ================================
00:18:11.998 Supported: No
00:18:11.998
00:18:11.998 Admin Command Set Attributes
00:18:11.998 ============================
00:18:11.999 Security Send/Receive: Not Supported
00:18:11.999 Format NVM: Not Supported
00:18:11.999 Firmware Activate/Download: Not Supported
00:18:11.999 Namespace Management: Not Supported
00:18:11.999 Device Self-Test: Not Supported
00:18:11.999 Directives: Not Supported
00:18:11.999 NVMe-MI: Not Supported
00:18:11.999 Virtualization Management: Not Supported
00:18:11.999 Doorbell Buffer Config: Not Supported
00:18:11.999 Get LBA Status Capability: Not Supported
00:18:11.999 Command & Feature Lockdown Capability: Not Supported
00:18:11.999 Abort Command Limit: 4
00:18:11.999 Async Event Request Limit: 4
00:18:11.999 Number of Firmware Slots: N/A
00:18:11.999 Firmware Slot 1 Read-Only: N/A
00:18:11.999 Firmware Activation Without Reset: N/A
00:18:11.999 Multiple Update Detection Support: N/A
00:18:11.999 Firmware Update Granularity: No Information Provided
00:18:11.999 Per-Namespace SMART Log: No
00:18:11.999 Asymmetric Namespace Access Log Page: Not Supported
00:18:11.999 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:18:11.999 Command Effects Log Page: Supported
00:18:11.999 Get Log Page Extended Data: Supported
00:18:11.999 Telemetry Log Pages: Not Supported
00:18:11.999 Persistent Event Log Pages: Not Supported
00:18:11.999 Supported Log Pages Log Page: May Support
00:18:11.999 Commands Supported & Effects Log Page: Not Supported
00:18:11.999 Feature Identifiers & Effects Log Page:May Support
00:18:11.999 NVMe-MI Commands & Effects Log Page: May Support
00:18:11.999 Data Area 4 for Telemetry Log: Not Supported
00:18:11.999 Error Log Page Entries Supported: 128
00:18:11.999 Keep Alive: Supported
00:18:11.999 Keep Alive Granularity: 10000 ms
00:18:11.999
00:18:11.999 NVM Command Set Attributes
00:18:11.999 ==========================
00:18:11.999 Submission Queue Entry Size
00:18:11.999 Max: 64
00:18:11.999 Min: 64
00:18:11.999 Completion Queue Entry Size
00:18:11.999 Max: 16
00:18:11.999 Min: 16
00:18:11.999 Number of Namespaces: 32
00:18:11.999 Compare Command: Supported
00:18:11.999 Write Uncorrectable Command: Not Supported
00:18:11.999 Dataset Management Command: Supported
00:18:11.999 Write Zeroes Command: Supported
00:18:11.999 Set Features Save Field: Not Supported
00:18:11.999 Reservations: Not Supported
00:18:11.999 Timestamp: Not Supported
00:18:11.999 Copy: Supported
00:18:11.999 Volatile Write Cache: Present
00:18:11.999 Atomic Write Unit (Normal): 1
00:18:11.999 Atomic Write Unit (PFail): 1
00:18:11.999 Atomic Compare & Write Unit: 1
00:18:11.999 Fused Compare & Write: Supported
00:18:11.999 Scatter-Gather List
00:18:11.999 SGL Command Set: Supported (Dword aligned)
00:18:11.999 SGL Keyed: Not Supported
00:18:11.999 SGL Bit Bucket Descriptor: Not Supported
00:18:11.999 SGL Metadata Pointer: Not Supported
00:18:11.999 Oversized SGL: Not Supported
00:18:11.999 SGL Metadata Address: Not Supported
00:18:11.999 SGL Offset: Not Supported
00:18:11.999 Transport SGL Data Block: Not Supported
00:18:11.999 Replay Protected Memory Block: Not Supported
00:18:11.999
00:18:11.999 Firmware Slot Information
00:18:11.999 =========================
00:18:11.999 Active slot: 1
00:18:11.999 Slot 1 Firmware Revision: 25.01
00:18:11.999
00:18:11.999
00:18:11.999 Commands Supported and Effects
00:18:11.999 ==============================
00:18:11.999 Admin
Commands 00:18:11.999 -------------- 00:18:11.999 Get Log Page (02h): Supported 00:18:11.999 Identify (06h): Supported 00:18:11.999 Abort (08h): Supported 00:18:11.999 Set Features (09h): Supported 00:18:11.999 Get Features (0Ah): Supported 00:18:11.999 Asynchronous Event Request (0Ch): Supported 00:18:11.999 Keep Alive (18h): Supported 00:18:11.999 I/O Commands 00:18:11.999 ------------ 00:18:11.999 Flush (00h): Supported LBA-Change 00:18:11.999 Write (01h): Supported LBA-Change 00:18:11.999 Read (02h): Supported 00:18:11.999 Compare (05h): Supported 00:18:11.999 Write Zeroes (08h): Supported LBA-Change 00:18:11.999 Dataset Management (09h): Supported LBA-Change 00:18:11.999 Copy (19h): Supported LBA-Change 00:18:11.999 00:18:11.999 Error Log 00:18:11.999 ========= 00:18:11.999 00:18:11.999 Arbitration 00:18:11.999 =========== 00:18:11.999 Arbitration Burst: 1 00:18:11.999 00:18:11.999 Power Management 00:18:11.999 ================ 00:18:11.999 Number of Power States: 1 00:18:11.999 Current Power State: Power State #0 00:18:11.999 Power State #0: 00:18:11.999 Max Power: 0.00 W 00:18:11.999 Non-Operational State: Operational 00:18:11.999 Entry Latency: Not Reported 00:18:11.999 Exit Latency: Not Reported 00:18:11.999 Relative Read Throughput: 0 00:18:11.999 Relative Read Latency: 0 00:18:11.999 Relative Write Throughput: 0 00:18:11.999 Relative Write Latency: 0 00:18:11.999 Idle Power: Not Reported 00:18:11.999 Active Power: Not Reported 00:18:11.999 Non-Operational Permissive Mode: Not Supported 00:18:11.999 00:18:11.999 Health Information 00:18:11.999 ================== 00:18:11.999 Critical Warnings: 00:18:11.999 Available Spare Space: OK 00:18:11.999 Temperature: OK 00:18:11.999 Device Reliability: OK 00:18:11.999 Read Only: No 00:18:11.999 Volatile Memory Backup: OK 00:18:11.999 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:11.999 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:11.999 Available Spare: 0% 00:18:11.999 Available Sp[2024-12-16 16:24:00.602950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:11.999 [2024-12-16 16:24:00.602960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:11.999 [2024-12-16 16:24:00.602986] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:11.999 [2024-12-16 16:24:00.602997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.999 [2024-12-16 16:24:00.603002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.999 [2024-12-16 16:24:00.603008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.999 [2024-12-16 16:24:00.603013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.258 [2024-12-16 16:24:00.607102] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:12.258 [2024-12-16 16:24:00.607113] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:12.258 [2024-12-16 16:24:00.607159] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:12.258 [2024-12-16 16:24:00.607205] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:12.258 [2024-12-16 16:24:00.607211] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:12.258 [2024-12-16 16:24:00.608163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:12.258 [2024-12-16 16:24:00.608173] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:12.258 [2024-12-16 16:24:00.608227] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:12.258 [2024-12-16 16:24:00.609185] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:12.258 are Threshold: 0% 00:18:12.258 Life Percentage Used: 0% 00:18:12.258 Data Units Read: 0 00:18:12.258 Data Units Written: 0 00:18:12.258 Host Read Commands: 0 00:18:12.258 Host Write Commands: 0 00:18:12.258 Controller Busy Time: 0 minutes 00:18:12.258 Power Cycles: 0 00:18:12.258 Power On Hours: 0 hours 00:18:12.258 Unsafe Shutdowns: 0 00:18:12.258 Unrecoverable Media Errors: 0 00:18:12.258 Lifetime Error Log Entries: 0 00:18:12.258 Warning Temperature Time: 0 minutes 00:18:12.258 Critical Temperature Time: 0 minutes 00:18:12.258 00:18:12.258 Number of Queues 00:18:12.258 ================ 00:18:12.258 Number of I/O Submission Queues: 127 00:18:12.258 Number of I/O Completion Queues: 127 00:18:12.258 00:18:12.258 Active Namespaces 00:18:12.258 ================= 00:18:12.258 Namespace ID:1 00:18:12.258 Error Recovery Timeout: Unlimited 00:18:12.258 Command Set Identifier: NVM (00h) 00:18:12.258 Deallocate: Supported 00:18:12.258 Deallocated/Unwritten Error: Not Supported 00:18:12.258 Deallocated Read Value: Unknown 00:18:12.258 Deallocate in Write Zeroes: Not Supported 00:18:12.258 Deallocated Guard Field: 0xFFFF 00:18:12.258 Flush: Supported 00:18:12.258 Reservation: Supported 00:18:12.258 Namespace Sharing Capabilities: Multiple Controllers 00:18:12.258 Size (in LBAs): 131072 (0GiB) 00:18:12.258 Capacity (in LBAs): 131072 (0GiB) 00:18:12.258 Utilization (in LBAs): 131072 (0GiB) 00:18:12.258 NGUID: 51A10F313E25433AA579142750727A65 00:18:12.258 UUID: 51a10f31-3e25-433a-a579-142750727a65 00:18:12.258 Thin Provisioning: Not Supported 00:18:12.258 Per-NS Atomic Units: Yes 00:18:12.258 Atomic Boundary Size (Normal): 0 00:18:12.258 Atomic Boundary Size (PFail): 0 00:18:12.258 Atomic Boundary Offset: 0 00:18:12.258 Maximum Single Source Range Length: 65535 00:18:12.258 Maximum Copy Length: 65535 00:18:12.258 Maximum Source Range Count: 1 00:18:12.258 NGUID/EUI64 Never Reused: No 00:18:12.258 Namespace Write Protected: No 00:18:12.258 Number of LBA Formats: 1 00:18:12.258 Current LBA Format: LBA Format #00 00:18:12.258 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:12.258 00:18:12.258 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
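For readers decoding the spdk_nvme_perf invocation just above, the flags map roughly as follows; readings that the trace does not echo back are best-effort and worth confirming against the tool's --help output:

    # -r '...'   transport ID; trtype/traddr/subnqn select the vfio-user endpoint
    # -s 256     DPDK hugepage memory size in MB
    # -g         use a single file descriptor for DPDK memory segments
    # -q 128     queue depth
    # -o 4096    I/O size in bytes
    # -w read    workload pattern (repeated below with -w write)
    # -t 5       run time in seconds
    # -c 0x2     core mask, i.e. lcore 1 only, matching "NSID 1 with lcore 1" below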
00:18:12.258 [2024-12-16 16:24:00.837187] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:17.527 Initializing NVMe Controllers
00:18:17.527 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:17.527 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:18:17.527 Initialization complete. Launching workers.
00:18:17.527 ========================================================
00:18:17.527 Latency(us)
00:18:17.527 Device Information : IOPS MiB/s Average min max
00:18:17.527 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39929.60 155.98 3205.23 977.47 8120.75
00:18:17.527 ========================================================
00:18:17.527 Total : 39929.60 155.98 3205.23 977.47 8120.75
00:18:17.527
00:18:17.527 [2024-12-16 16:24:05.854683] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:17.527 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:18:17.527 [2024-12-16 16:24:06.087747] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:22.797 Initializing NVMe Controllers
00:18:22.797 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:22.797 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:18:22.797 Initialization complete. Launching workers.
00:18:22.797 ======================================================== 00:18:22.797 Latency(us) 00:18:22.797 Device Information : IOPS MiB/s Average min max 00:18:22.797 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7970.15 4970.10 9980.58 00:18:22.797 ======================================================== 00:18:22.797 Total : 16076.80 62.80 7970.15 4970.10 9980.58 00:18:22.797 00:18:22.797 [2024-12-16 16:24:11.124487] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.797 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:22.797 [2024-12-16 16:24:11.331477] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:28.072 [2024-12-16 16:24:16.399365] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:28.072 Initializing NVMe Controllers 00:18:28.072 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:28.072 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:28.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:28.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:28.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:28.072 Initialization complete. Launching workers. 00:18:28.072 Starting thread on core 2 00:18:28.072 Starting thread on core 3 00:18:28.072 Starting thread on core 1 00:18:28.072 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:28.331 [2024-12-16 16:24:16.691511] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.773 [2024-12-16 16:24:19.742023] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.773 Initializing NVMe Controllers 00:18:31.773 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.773 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.773 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:31.773 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:31.773 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:31.773 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:31.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:31.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:31.773 Initialization complete. Launching workers. 
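The reconnect run above started worker threads on cores 1, 2 and 3, which is exactly what its -c 0xE core mask encodes (0xE = binary 1110). A minimal plain-bash sketch to decode such a mask; nothing here is SPDK-specific:

    # Decode an SPDK-style hex core mask; 0xE selects cores 1, 2 and 3,
    # matching the "Starting thread on core 1/2/3" lines above.
    mask=0xE
    for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && echo "core $core selected"
    done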
00:18:31.773 Starting thread on core 1 with urgent priority queue 00:18:31.773 Starting thread on core 2 with urgent priority queue 00:18:31.773 Starting thread on core 3 with urgent priority queue 00:18:31.773 Starting thread on core 0 with urgent priority queue 00:18:31.773 SPDK bdev Controller (SPDK1 ) core 0: 7659.00 IO/s 13.06 secs/100000 ios 00:18:31.773 SPDK bdev Controller (SPDK1 ) core 1: 8683.67 IO/s 11.52 secs/100000 ios 00:18:31.773 SPDK bdev Controller (SPDK1 ) core 2: 8347.67 IO/s 11.98 secs/100000 ios 00:18:31.773 SPDK bdev Controller (SPDK1 ) core 3: 10434.67 IO/s 9.58 secs/100000 ios 00:18:31.773 ======================================================== 00:18:31.773 00:18:31.773 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:31.773 [2024-12-16 16:24:20.032249] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.773 Initializing NVMe Controllers 00:18:31.773 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.773 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.773 Namespace ID: 1 size: 0GB 00:18:31.773 Initialization complete. 00:18:31.773 INFO: using host memory buffer for IO 00:18:31.773 Hello world! 00:18:31.773 [2024-12-16 16:24:20.067501] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.773 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:31.773 [2024-12-16 16:24:20.354503] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:33.151 Initializing NVMe Controllers 00:18:33.151 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:33.151 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:33.151 Initialization complete. Launching workers. 
00:18:33.151 submit (in ns) avg, min, max = 5812.6, 3133.3, 3999519.0 00:18:33.151 complete (in ns) avg, min, max = 22782.9, 1719.0, 6989485.7 00:18:33.151 00:18:33.151 Submit histogram 00:18:33.151 ================ 00:18:33.151 Range in us Cumulative Count 00:18:33.151 3.124 - 3.139: 0.0061% ( 1) 00:18:33.151 3.139 - 3.154: 0.0183% ( 2) 00:18:33.151 3.154 - 3.170: 0.0730% ( 9) 00:18:33.151 3.170 - 3.185: 0.1278% ( 9) 00:18:33.151 3.185 - 3.200: 0.1765% ( 8) 00:18:33.151 3.200 - 3.215: 0.6329% ( 75) 00:18:33.151 3.215 - 3.230: 2.6109% ( 325) 00:18:33.151 3.230 - 3.246: 7.6258% ( 824) 00:18:33.151 3.246 - 3.261: 13.1581% ( 909) 00:18:33.151 3.261 - 3.276: 19.8831% ( 1105) 00:18:33.151 3.276 - 3.291: 27.2229% ( 1206) 00:18:33.151 3.291 - 3.307: 33.3455% ( 1006) 00:18:33.151 3.307 - 3.322: 38.3969% ( 830) 00:18:33.151 3.322 - 3.337: 43.6127% ( 857) 00:18:33.151 3.337 - 3.352: 48.9684% ( 880) 00:18:33.151 3.352 - 3.368: 52.8696% ( 641) 00:18:33.151 3.368 - 3.383: 58.0610% ( 853) 00:18:33.151 3.383 - 3.398: 64.9382% ( 1130) 00:18:33.151 3.398 - 3.413: 70.1175% ( 851) 00:18:33.151 3.413 - 3.429: 76.1183% ( 986) 00:18:33.151 3.429 - 3.444: 81.0358% ( 808) 00:18:33.151 3.444 - 3.459: 84.0302% ( 492) 00:18:33.151 3.459 - 3.474: 85.9777% ( 320) 00:18:33.151 3.474 - 3.490: 87.2497% ( 209) 00:18:33.151 3.490 - 3.505: 87.9861% ( 121) 00:18:33.151 3.505 - 3.520: 88.6069% ( 102) 00:18:33.151 3.520 - 3.535: 89.3859% ( 128) 00:18:33.151 3.535 - 3.550: 90.2075% ( 135) 00:18:33.151 3.550 - 3.566: 91.1509% ( 155) 00:18:33.151 3.566 - 3.581: 92.1855% ( 170) 00:18:33.151 3.581 - 3.596: 93.0315% ( 139) 00:18:33.151 3.596 - 3.611: 93.8409% ( 133) 00:18:33.151 3.611 - 3.627: 94.6808% ( 138) 00:18:33.151 3.627 - 3.642: 95.4233% ( 122) 00:18:33.151 3.642 - 3.657: 96.2145% ( 130) 00:18:33.151 3.657 - 3.672: 96.9570% ( 122) 00:18:33.151 3.672 - 3.688: 97.6508% ( 114) 00:18:33.151 3.688 - 3.703: 98.1742% ( 86) 00:18:33.151 3.703 - 3.718: 98.5150% ( 56) 00:18:33.151 3.718 - 3.733: 98.7524% ( 39) 00:18:33.151 3.733 - 3.749: 99.0262% ( 45) 00:18:33.151 3.749 - 3.764: 99.2575% ( 38) 00:18:33.151 3.764 - 3.779: 99.4218% ( 27) 00:18:33.151 3.779 - 3.794: 99.5618% ( 23) 00:18:33.151 3.794 - 3.810: 99.6348% ( 12) 00:18:33.151 3.810 - 3.825: 99.6592% ( 4) 00:18:33.151 3.825 - 3.840: 99.6774% ( 3) 00:18:33.151 3.840 - 3.855: 99.6896% ( 2) 00:18:33.151 3.855 - 3.870: 99.7018% ( 2) 00:18:33.151 3.992 - 4.023: 99.7079% ( 1) 00:18:33.151 4.114 - 4.145: 99.7140% ( 1) 00:18:33.151 5.425 - 5.455: 99.7200% ( 1) 00:18:33.151 5.608 - 5.638: 99.7261% ( 1) 00:18:33.151 5.790 - 5.821: 99.7322% ( 1) 00:18:33.151 5.851 - 5.882: 99.7383% ( 1) 00:18:33.151 5.882 - 5.912: 99.7444% ( 1) 00:18:33.151 5.912 - 5.943: 99.7566% ( 2) 00:18:33.151 5.943 - 5.973: 99.7626% ( 1) 00:18:33.151 5.973 - 6.004: 99.7687% ( 1) 00:18:33.151 6.004 - 6.034: 99.7748% ( 1) 00:18:33.151 6.034 - 6.065: 99.7809% ( 1) 00:18:33.151 6.065 - 6.095: 99.7870% ( 1) 00:18:33.151 6.095 - 6.126: 99.7992% ( 2) 00:18:33.151 6.187 - 6.217: 99.8052% ( 1) 00:18:33.151 6.217 - 6.248: 99.8113% ( 1) 00:18:33.151 6.248 - 6.278: 99.8174% ( 1) 00:18:33.152 6.278 - 6.309: 99.8357% ( 3) 00:18:33.152 6.400 - 6.430: 99.8418% ( 1) 00:18:33.152 6.430 - 6.461: 99.8478% ( 1) 00:18:33.152 6.644 - 6.674: 99.8539% ( 1) 00:18:33.152 6.827 - 6.857: 99.8661% ( 2) 00:18:33.152 6.857 - 6.888: 99.8722% ( 1) 00:18:33.152 6.888 - 6.918: 99.8844% ( 2) 00:18:33.152 6.979 - 7.010: 99.8905% ( 1) 00:18:33.152 7.040 - 7.070: 99.8965% ( 1) 00:18:33.152 7.131 - 7.162: 99.9026% ( 1) 00:18:33.152 7.253 - 7.284: 99.9087% 
( 1) 00:18:33.152 7.284 - 7.314: 99.9148% ( 1) 00:18:33.152 7.467 - 7.497: 99.9209% ( 1) 00:18:33.152 [2024-12-16 16:24:21.369661] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:33.152 7.497 - 7.528: 99.9270% ( 1) 00:18:33.152 7.619 - 7.650: 99.9331% ( 1) 00:18:33.152 10.545 - 10.606: 99.9391% ( 1) 00:18:33.152 3994.575 - 4025.783: 100.0000% ( 10) 00:18:33.152 00:18:33.152 Complete histogram 00:18:33.152 ================== 00:18:33.152 Range in us Cumulative Count 00:18:33.152 1.714 - 1.722: 0.0061% ( 1) 00:18:33.152 1.722 - 1.730: 0.0122% ( 1) 00:18:33.152 1.730 - 1.737: 0.1339% ( 20) 00:18:33.152 1.737 - 1.745: 0.3895% ( 42) 00:18:33.152 1.745 - 1.752: 0.5356% ( 24) 00:18:33.152 1.752 - 1.760: 0.5660% ( 5) 00:18:33.152 1.760 - 1.768: 0.6573% ( 15) 00:18:33.152 1.768 - 1.775: 1.6737% ( 167) 00:18:33.152 1.775 - 1.783: 12.1113% ( 1715) 00:18:33.152 1.783 - 1.790: 38.8716% ( 4397) 00:18:33.152 1.790 - 1.798: 64.6644% ( 4238) 00:18:33.152 1.798 - 1.806: 74.9011% ( 1682) 00:18:33.152 1.806 - 1.813: 78.3215% ( 562) 00:18:33.152 1.813 - 1.821: 80.6463% ( 382) 00:18:33.152 1.821 - 1.829: 81.9670% ( 217) 00:18:33.152 1.829 - 1.836: 84.2919% ( 382) 00:18:33.152 1.836 - 1.844: 88.4852% ( 689) 00:18:33.152 1.844 - 1.851: 92.1064% ( 595) 00:18:33.152 1.851 - 1.859: 94.6382% ( 416) 00:18:33.152 1.859 - 1.867: 96.3240% ( 277) 00:18:33.152 1.867 - 1.874: 97.3830% ( 174) 00:18:33.152 1.874 - 1.882: 98.0951% ( 117) 00:18:33.152 1.882 - 1.890: 98.4176% ( 53) 00:18:33.152 1.890 - 1.897: 98.6002% ( 30) 00:18:33.152 1.897 - 1.905: 98.7524% ( 25) 00:18:33.152 1.905 - 1.912: 98.9045% ( 25) 00:18:33.152 1.912 - 1.920: 99.0080% ( 17) 00:18:33.152 1.920 - 1.928: 99.0810% ( 12) 00:18:33.152 1.928 - 1.935: 99.1419% ( 10) 00:18:33.152 1.935 - 1.943: 99.1662% ( 4) 00:18:33.152 1.943 - 1.950: 99.2088% ( 7) 00:18:33.152 1.950 - 1.966: 99.2392% ( 5) 00:18:33.152 1.966 - 1.981: 99.2575% ( 3) 00:18:33.152 1.981 - 1.996: 99.2636% ( 1) 00:18:33.152 1.996 - 2.011: 99.2697% ( 1) 00:18:33.152 3.870 - 3.886: 99.2818% ( 2) 00:18:33.152 3.931 - 3.962: 99.2879% ( 1) 00:18:33.152 4.145 - 4.175: 99.3001% ( 2) 00:18:33.152 4.175 - 4.206: 99.3062% ( 1) 00:18:33.152 4.236 - 4.267: 99.3184% ( 2) 00:18:33.152 4.267 - 4.297: 99.3244% ( 1) 00:18:33.152 4.328 - 4.358: 99.3305% ( 1) 00:18:33.152 4.450 - 4.480: 99.3366% ( 1) 00:18:33.152 4.541 - 4.571: 99.3427% ( 1) 00:18:33.152 4.571 - 4.602: 99.3488% ( 1) 00:18:33.152 4.632 - 4.663: 99.3549% ( 1) 00:18:33.152 4.663 - 4.693: 99.3610% ( 1) 00:18:33.152 4.724 - 4.754: 99.3731% ( 2) 00:18:33.152 4.815 - 4.846: 99.3792% ( 1) 00:18:33.152 4.876 - 4.907: 99.3853% ( 1) 00:18:33.152 4.998 - 5.029: 99.3914% ( 1) 00:18:33.152 5.090 - 5.120: 99.3975% ( 1) 00:18:33.152 5.150 - 5.181: 99.4036% ( 1) 00:18:33.152 5.242 - 5.272: 99.4097% ( 1) 00:18:33.152 5.272 - 5.303: 99.4157% ( 1) 00:18:33.152 5.394 - 5.425: 99.4218% ( 1) 00:18:33.152 6.034 - 6.065: 99.4279% ( 1) 00:18:33.152 6.430 - 6.461: 99.4340% ( 1) 00:18:33.152 6.522 - 6.552: 99.4401% ( 1) 00:18:33.152 6.888 - 6.918: 99.4462% ( 1) 00:18:33.152 12.008 - 12.069: 99.4523% ( 1) 00:18:33.152 12.130 - 12.190: 99.4583% ( 1) 00:18:33.152 12.251 - 12.312: 99.4644% ( 1) 00:18:33.152 17.432 - 17.554: 99.4705% ( 1) 00:18:33.152 38.278 - 38.522: 99.4766% ( 1) 00:18:33.152 1037.653 - 1045.455: 99.4827% ( 1) 00:18:33.152 3994.575 - 4025.783: 99.9878% ( 83) 00:18:33.152 4993.219 - 5024.427: 99.9939% ( 1) 00:18:33.152 6959.299 - 6990.507: 100.0000% ( 1) 00:18:33.152 00:18:33.152 16:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:33.152 [ 00:18:33.152 { 00:18:33.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:33.152 "subtype": "Discovery", 00:18:33.152 "listen_addresses": [], 00:18:33.152 "allow_any_host": true, 00:18:33.152 "hosts": [] 00:18:33.152 }, 00:18:33.152 { 00:18:33.152 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:33.152 "subtype": "NVMe", 00:18:33.152 "listen_addresses": [ 00:18:33.152 { 00:18:33.152 "trtype": "VFIOUSER", 00:18:33.152 "adrfam": "IPv4", 00:18:33.152 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:33.152 "trsvcid": "0" 00:18:33.152 } 00:18:33.152 ], 00:18:33.152 "allow_any_host": true, 00:18:33.152 "hosts": [], 00:18:33.152 "serial_number": "SPDK1", 00:18:33.152 "model_number": "SPDK bdev Controller", 00:18:33.152 "max_namespaces": 32, 00:18:33.152 "min_cntlid": 1, 00:18:33.152 "max_cntlid": 65519, 00:18:33.152 "namespaces": [ 00:18:33.152 { 00:18:33.152 "nsid": 1, 00:18:33.152 "bdev_name": "Malloc1", 00:18:33.152 "name": "Malloc1", 00:18:33.152 "nguid": "51A10F313E25433AA579142750727A65", 00:18:33.152 "uuid": "51a10f31-3e25-433a-a579-142750727a65" 00:18:33.152 } 00:18:33.152 ] 00:18:33.152 }, 00:18:33.152 { 00:18:33.152 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:33.152 "subtype": "NVMe", 00:18:33.152 "listen_addresses": [ 00:18:33.152 { 00:18:33.152 "trtype": "VFIOUSER", 00:18:33.152 "adrfam": "IPv4", 00:18:33.152 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:33.152 "trsvcid": "0" 00:18:33.152 } 00:18:33.152 ], 00:18:33.152 "allow_any_host": true, 00:18:33.152 "hosts": [], 00:18:33.152 "serial_number": "SPDK2", 00:18:33.152 "model_number": "SPDK bdev Controller", 00:18:33.152 "max_namespaces": 32, 00:18:33.152 "min_cntlid": 1, 00:18:33.152 "max_cntlid": 65519, 00:18:33.152 "namespaces": [ 00:18:33.152 { 00:18:33.152 "nsid": 1, 00:18:33.152 "bdev_name": "Malloc2", 00:18:33.152 "name": "Malloc2", 00:18:33.152 "nguid": "BE7B9C5E893246BF8BD941A68A554733", 00:18:33.152 "uuid": "be7b9c5e-8932-46bf-8bd9-41a68a554733" 00:18:33.152 } 00:18:33.152 ] 00:18:33.152 } 00:18:33.152 ] 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=968176 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1269 -- # local i=0 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:33.152 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:33.153 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:33.153 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:33.412 [2024-12-16 16:24:21.768510] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:33.412 Malloc3 00:18:33.412 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:33.412 [2024-12-16 16:24:22.002219] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:33.671 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:33.671 Asynchronous Event Request test 00:18:33.671 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:33.671 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:33.671 Registering asynchronous event callbacks... 00:18:33.671 Starting namespace attribute notice tests for all controllers... 00:18:33.671 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:33.671 aer_cb - Changed Namespace 00:18:33.671 Cleaning up... 
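The AER exercise above works by attaching a second namespace while the aer example app is waiting: bdev_malloc_create builds a 64 MB malloc bdev with 512-byte blocks, and nvmf_subsystem_add_ns attaches it to cnode1 as NSID 2, which fires the namespace-attribute-changed event ("aer_cb - Changed Namespace") reported above and is confirmed by the updated subsystem listing that follows. The two RPC calls, reproduced from this run (the rpc.py path is taken from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_malloc_create 64 512 --name Malloc3                       # 64 MB bdev, 512 B blocks
    "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # attach as NSID 2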
00:18:33.671 [ 00:18:33.671 { 00:18:33.671 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:33.671 "subtype": "Discovery", 00:18:33.671 "listen_addresses": [], 00:18:33.671 "allow_any_host": true, 00:18:33.671 "hosts": [] 00:18:33.671 }, 00:18:33.671 { 00:18:33.671 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:33.671 "subtype": "NVMe", 00:18:33.671 "listen_addresses": [ 00:18:33.671 { 00:18:33.671 "trtype": "VFIOUSER", 00:18:33.671 "adrfam": "IPv4", 00:18:33.671 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:33.671 "trsvcid": "0" 00:18:33.671 } 00:18:33.671 ], 00:18:33.671 "allow_any_host": true, 00:18:33.671 "hosts": [], 00:18:33.671 "serial_number": "SPDK1", 00:18:33.671 "model_number": "SPDK bdev Controller", 00:18:33.671 "max_namespaces": 32, 00:18:33.671 "min_cntlid": 1, 00:18:33.671 "max_cntlid": 65519, 00:18:33.671 "namespaces": [ 00:18:33.671 { 00:18:33.671 "nsid": 1, 00:18:33.671 "bdev_name": "Malloc1", 00:18:33.671 "name": "Malloc1", 00:18:33.671 "nguid": "51A10F313E25433AA579142750727A65", 00:18:33.671 "uuid": "51a10f31-3e25-433a-a579-142750727a65" 00:18:33.671 }, 00:18:33.671 { 00:18:33.671 "nsid": 2, 00:18:33.671 "bdev_name": "Malloc3", 00:18:33.671 "name": "Malloc3", 00:18:33.671 "nguid": "C84488F1DD1E4E158F56B4725CC34874", 00:18:33.671 "uuid": "c84488f1-dd1e-4e15-8f56-b4725cc34874" 00:18:33.671 } 00:18:33.671 ] 00:18:33.671 }, 00:18:33.671 { 00:18:33.671 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:33.671 "subtype": "NVMe", 00:18:33.671 "listen_addresses": [ 00:18:33.671 { 00:18:33.671 "trtype": "VFIOUSER", 00:18:33.671 "adrfam": "IPv4", 00:18:33.671 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:33.671 "trsvcid": "0" 00:18:33.671 } 00:18:33.671 ], 00:18:33.671 "allow_any_host": true, 00:18:33.671 "hosts": [], 00:18:33.671 "serial_number": "SPDK2", 00:18:33.671 "model_number": "SPDK bdev Controller", 00:18:33.671 "max_namespaces": 32, 00:18:33.671 "min_cntlid": 1, 00:18:33.671 "max_cntlid": 65519, 00:18:33.671 "namespaces": [ 00:18:33.671 { 00:18:33.671 "nsid": 1, 00:18:33.671 "bdev_name": "Malloc2", 00:18:33.671 "name": "Malloc2", 00:18:33.671 "nguid": "BE7B9C5E893246BF8BD941A68A554733", 00:18:33.671 "uuid": "be7b9c5e-8932-46bf-8bd9-41a68a554733" 00:18:33.671 } 00:18:33.671 ] 00:18:33.671 } 00:18:33.671 ] 00:18:33.671 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 968176 00:18:33.671 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:33.671 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:33.671 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:33.671 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:33.671 [2024-12-16 16:24:22.245502] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:33.671 [2024-12-16 16:24:22.245551] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968192 ] 00:18:33.932 [2024-12-16 16:24:22.286427] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:33.932 [2024-12-16 16:24:22.295337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:33.932 [2024-12-16 16:24:22.295358] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff1ae33f000 00:18:33.932 [2024-12-16 16:24:22.296337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.297338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.298349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.299363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.300372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.301375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.302387] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.303391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:33.932 [2024-12-16 16:24:22.304395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:33.932 [2024-12-16 16:24:22.304407] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff1ad049000 00:18:33.932 [2024-12-16 16:24:22.305324] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:33.932 [2024-12-16 16:24:22.314685] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:33.932 [2024-12-16 16:24:22.314709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:33.932 [2024-12-16 16:24:22.319793] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:33.932 [2024-12-16 16:24:22.319829] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:33.932 [2024-12-16 16:24:22.319902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:33.932 
[2024-12-16 16:24:22.319916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:33.932 [2024-12-16 16:24:22.319922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:33.932 [2024-12-16 16:24:22.320798] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:33.932 [2024-12-16 16:24:22.320809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:33.932 [2024-12-16 16:24:22.320816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:33.932 [2024-12-16 16:24:22.321800] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:33.932 [2024-12-16 16:24:22.321809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:33.932 [2024-12-16 16:24:22.321815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:33.932 [2024-12-16 16:24:22.322807] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:33.932 [2024-12-16 16:24:22.322816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:33.932 [2024-12-16 16:24:22.323816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:33.932 [2024-12-16 16:24:22.323825] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:33.933 [2024-12-16 16:24:22.323830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:33.933 [2024-12-16 16:24:22.323836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:33.933 [2024-12-16 16:24:22.323944] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:33.933 [2024-12-16 16:24:22.323948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:33.933 [2024-12-16 16:24:22.323953] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:33.933 [2024-12-16 16:24:22.324826] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:33.933 [2024-12-16 16:24:22.325829] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:33.933 [2024-12-16 16:24:22.326834] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:33.933 [2024-12-16 16:24:22.327839] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:33.933 [2024-12-16 16:24:22.327877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:33.933 [2024-12-16 16:24:22.328846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:33.933 [2024-12-16 16:24:22.328856] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:33.933 [2024-12-16 16:24:22.328861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.328878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:33.933 [2024-12-16 16:24:22.328887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.328897] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:33.933 [2024-12-16 16:24:22.328902] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.933 [2024-12-16 16:24:22.328905] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.933 [2024-12-16 16:24:22.328917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.336103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.336114] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:33.933 [2024-12-16 16:24:22.336119] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:33.933 [2024-12-16 16:24:22.336123] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:33.933 [2024-12-16 16:24:22.336128] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:33.933 [2024-12-16 16:24:22.336132] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:33.933 [2024-12-16 16:24:22.336137] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:33.933 [2024-12-16 16:24:22.336141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.336150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:33.933 [2024-12-16 
16:24:22.336162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.344101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.344113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.933 [2024-12-16 16:24:22.344121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.933 [2024-12-16 16:24:22.344131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.933 [2024-12-16 16:24:22.344138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.933 [2024-12-16 16:24:22.344143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.344150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.344159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.352102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.352110] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:33.933 [2024-12-16 16:24:22.352115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.352122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.352127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.352135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.357113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.359125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.359140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.359148] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:33.933 [2024-12-16 16:24:22.359152] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:33.933 [2024-12-16 16:24:22.359155] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.933 [2024-12-16 16:24:22.359162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.367101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.367112] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:33.933 [2024-12-16 16:24:22.367122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.367129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.367135] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:33.933 [2024-12-16 16:24:22.367139] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.933 [2024-12-16 16:24:22.367142] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.933 [2024-12-16 16:24:22.367148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.375100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.375115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.375123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.375130] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:33.933 [2024-12-16 16:24:22.375134] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.933 [2024-12-16 16:24:22.375137] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.933 [2024-12-16 16:24:22.375143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.933 [2024-12-16 16:24:22.383101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:33.933 [2024-12-16 16:24:22.383111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.383118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.383125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.383130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.383134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:33.933 [2024-12-16 16:24:22.383140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:33.934 [2024-12-16 16:24:22.383144] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:33.934 [2024-12-16 16:24:22.383148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:33.934 [2024-12-16 16:24:22.383153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:33.934 [2024-12-16 16:24:22.383169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.391101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.391114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.399100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.399113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.407102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.407114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.415102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.415120] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:33.934 [2024-12-16 16:24:22.415125] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:33.934 [2024-12-16 16:24:22.415128] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:33.934 [2024-12-16 16:24:22.415131] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:33.934 [2024-12-16 16:24:22.415134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:33.934 [2024-12-16 16:24:22.415141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:33.934 [2024-12-16 16:24:22.415147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:33.934 
[2024-12-16 16:24:22.415151] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:33.934 [2024-12-16 16:24:22.415154] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.934 [2024-12-16 16:24:22.415159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.415165] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:33.934 [2024-12-16 16:24:22.415169] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.934 [2024-12-16 16:24:22.415172] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.934 [2024-12-16 16:24:22.415177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.415184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:33.934 [2024-12-16 16:24:22.415187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:33.934 [2024-12-16 16:24:22.415190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.934 [2024-12-16 16:24:22.415196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:33.934 [2024-12-16 16:24:22.423101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.423115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.423124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:33.934 [2024-12-16 16:24:22.423130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:33.934 ===================================================== 00:18:33.934 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:33.934 ===================================================== 00:18:33.934 Controller Capabilities/Features 00:18:33.934 ================================ 00:18:33.934 Vendor ID: 4e58 00:18:33.934 Subsystem Vendor ID: 4e58 00:18:33.934 Serial Number: SPDK2 00:18:33.934 Model Number: SPDK bdev Controller 00:18:33.934 Firmware Version: 25.01 00:18:33.934 Recommended Arb Burst: 6 00:18:33.934 IEEE OUI Identifier: 8d 6b 50 00:18:33.934 Multi-path I/O 00:18:33.934 May have multiple subsystem ports: Yes 00:18:33.934 May have multiple controllers: Yes 00:18:33.934 Associated with SR-IOV VF: No 00:18:33.934 Max Data Transfer Size: 131072 00:18:33.934 Max Number of Namespaces: 32 00:18:33.934 Max Number of I/O Queues: 127 00:18:33.934 NVMe Specification Version (VS): 1.3 00:18:33.934 NVMe Specification Version (Identify): 1.3 00:18:33.934 Maximum Queue Entries: 256 00:18:33.934 Contiguous Queues Required: Yes 00:18:33.934 Arbitration Mechanisms Supported 00:18:33.934 Weighted Round Robin: Not Supported 00:18:33.934 Vendor Specific: Not 
Supported 00:18:33.934 Reset Timeout: 15000 ms 00:18:33.934 Doorbell Stride: 4 bytes 00:18:33.934 NVM Subsystem Reset: Not Supported 00:18:33.934 Command Sets Supported 00:18:33.934 NVM Command Set: Supported 00:18:33.934 Boot Partition: Not Supported 00:18:33.934 Memory Page Size Minimum: 4096 bytes 00:18:33.934 Memory Page Size Maximum: 4096 bytes 00:18:33.934 Persistent Memory Region: Not Supported 00:18:33.934 Optional Asynchronous Events Supported 00:18:33.934 Namespace Attribute Notices: Supported 00:18:33.934 Firmware Activation Notices: Not Supported 00:18:33.934 ANA Change Notices: Not Supported 00:18:33.934 PLE Aggregate Log Change Notices: Not Supported 00:18:33.934 LBA Status Info Alert Notices: Not Supported 00:18:33.934 EGE Aggregate Log Change Notices: Not Supported 00:18:33.934 Normal NVM Subsystem Shutdown event: Not Supported 00:18:33.934 Zone Descriptor Change Notices: Not Supported 00:18:33.934 Discovery Log Change Notices: Not Supported 00:18:33.934 Controller Attributes 00:18:33.934 128-bit Host Identifier: Supported 00:18:33.934 Non-Operational Permissive Mode: Not Supported 00:18:33.934 NVM Sets: Not Supported 00:18:33.934 Read Recovery Levels: Not Supported 00:18:33.934 Endurance Groups: Not Supported 00:18:33.934 Predictable Latency Mode: Not Supported 00:18:33.934 Traffic Based Keep ALive: Not Supported 00:18:33.934 Namespace Granularity: Not Supported 00:18:33.934 SQ Associations: Not Supported 00:18:33.934 UUID List: Not Supported 00:18:33.934 Multi-Domain Subsystem: Not Supported 00:18:33.934 Fixed Capacity Management: Not Supported 00:18:33.934 Variable Capacity Management: Not Supported 00:18:33.934 Delete Endurance Group: Not Supported 00:18:33.934 Delete NVM Set: Not Supported 00:18:33.934 Extended LBA Formats Supported: Not Supported 00:18:33.934 Flexible Data Placement Supported: Not Supported 00:18:33.934 00:18:33.934 Controller Memory Buffer Support 00:18:33.934 ================================ 00:18:33.934 Supported: No 00:18:33.934 00:18:33.934 Persistent Memory Region Support 00:18:33.934 ================================ 00:18:33.934 Supported: No 00:18:33.934 00:18:33.934 Admin Command Set Attributes 00:18:33.934 ============================ 00:18:33.934 Security Send/Receive: Not Supported 00:18:33.934 Format NVM: Not Supported 00:18:33.934 Firmware Activate/Download: Not Supported 00:18:33.934 Namespace Management: Not Supported 00:18:33.934 Device Self-Test: Not Supported 00:18:33.934 Directives: Not Supported 00:18:33.934 NVMe-MI: Not Supported 00:18:33.934 Virtualization Management: Not Supported 00:18:33.934 Doorbell Buffer Config: Not Supported 00:18:33.934 Get LBA Status Capability: Not Supported 00:18:33.934 Command & Feature Lockdown Capability: Not Supported 00:18:33.934 Abort Command Limit: 4 00:18:33.934 Async Event Request Limit: 4 00:18:33.934 Number of Firmware Slots: N/A 00:18:33.934 Firmware Slot 1 Read-Only: N/A 00:18:33.934 Firmware Activation Without Reset: N/A 00:18:33.934 Multiple Update Detection Support: N/A 00:18:33.934 Firmware Update Granularity: No Information Provided 00:18:33.934 Per-Namespace SMART Log: No 00:18:33.934 Asymmetric Namespace Access Log Page: Not Supported 00:18:33.934 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:33.934 Command Effects Log Page: Supported 00:18:33.934 Get Log Page Extended Data: Supported 00:18:33.934 Telemetry Log Pages: Not Supported 00:18:33.934 Persistent Event Log Pages: Not Supported 00:18:33.934 Supported Log Pages Log Page: May Support 00:18:33.934 Commands Supported & 
Effects Log Page: Not Supported 00:18:33.934 Feature Identifiers & Effects Log Page:May Support 00:18:33.934 NVMe-MI Commands & Effects Log Page: May Support 00:18:33.934 Data Area 4 for Telemetry Log: Not Supported 00:18:33.934 Error Log Page Entries Supported: 128 00:18:33.934 Keep Alive: Supported 00:18:33.934 Keep Alive Granularity: 10000 ms 00:18:33.934 00:18:33.934 NVM Command Set Attributes 00:18:33.934 ========================== 00:18:33.934 Submission Queue Entry Size 00:18:33.934 Max: 64 00:18:33.934 Min: 64 00:18:33.934 Completion Queue Entry Size 00:18:33.934 Max: 16 00:18:33.934 Min: 16 00:18:33.934 Number of Namespaces: 32 00:18:33.934 Compare Command: Supported 00:18:33.934 Write Uncorrectable Command: Not Supported 00:18:33.934 Dataset Management Command: Supported 00:18:33.934 Write Zeroes Command: Supported 00:18:33.934 Set Features Save Field: Not Supported 00:18:33.934 Reservations: Not Supported 00:18:33.934 Timestamp: Not Supported 00:18:33.934 Copy: Supported 00:18:33.934 Volatile Write Cache: Present 00:18:33.934 Atomic Write Unit (Normal): 1 00:18:33.935 Atomic Write Unit (PFail): 1 00:18:33.935 Atomic Compare & Write Unit: 1 00:18:33.935 Fused Compare & Write: Supported 00:18:33.935 Scatter-Gather List 00:18:33.935 SGL Command Set: Supported (Dword aligned) 00:18:33.935 SGL Keyed: Not Supported 00:18:33.935 SGL Bit Bucket Descriptor: Not Supported 00:18:33.935 SGL Metadata Pointer: Not Supported 00:18:33.935 Oversized SGL: Not Supported 00:18:33.935 SGL Metadata Address: Not Supported 00:18:33.935 SGL Offset: Not Supported 00:18:33.935 Transport SGL Data Block: Not Supported 00:18:33.935 Replay Protected Memory Block: Not Supported 00:18:33.935 00:18:33.935 Firmware Slot Information 00:18:33.935 ========================= 00:18:33.935 Active slot: 1 00:18:33.935 Slot 1 Firmware Revision: 25.01 00:18:33.935 00:18:33.935 00:18:33.935 Commands Supported and Effects 00:18:33.935 ============================== 00:18:33.935 Admin Commands 00:18:33.935 -------------- 00:18:33.935 Get Log Page (02h): Supported 00:18:33.935 Identify (06h): Supported 00:18:33.935 Abort (08h): Supported 00:18:33.935 Set Features (09h): Supported 00:18:33.935 Get Features (0Ah): Supported 00:18:33.935 Asynchronous Event Request (0Ch): Supported 00:18:33.935 Keep Alive (18h): Supported 00:18:33.935 I/O Commands 00:18:33.935 ------------ 00:18:33.935 Flush (00h): Supported LBA-Change 00:18:33.935 Write (01h): Supported LBA-Change 00:18:33.935 Read (02h): Supported 00:18:33.935 Compare (05h): Supported 00:18:33.935 Write Zeroes (08h): Supported LBA-Change 00:18:33.935 Dataset Management (09h): Supported LBA-Change 00:18:33.935 Copy (19h): Supported LBA-Change 00:18:33.935 00:18:33.935 Error Log 00:18:33.935 ========= 00:18:33.935 00:18:33.935 Arbitration 00:18:33.935 =========== 00:18:33.935 Arbitration Burst: 1 00:18:33.935 00:18:33.935 Power Management 00:18:33.935 ================ 00:18:33.935 Number of Power States: 1 00:18:33.935 Current Power State: Power State #0 00:18:33.935 Power State #0: 00:18:33.935 Max Power: 0.00 W 00:18:33.935 Non-Operational State: Operational 00:18:33.935 Entry Latency: Not Reported 00:18:33.935 Exit Latency: Not Reported 00:18:33.935 Relative Read Throughput: 0 00:18:33.935 Relative Read Latency: 0 00:18:33.935 Relative Write Throughput: 0 00:18:33.935 Relative Write Latency: 0 00:18:33.935 Idle Power: Not Reported 00:18:33.935 Active Power: Not Reported 00:18:33.935 Non-Operational Permissive Mode: Not Supported 00:18:33.935 00:18:33.935 Health Information 
00:18:33.935 ================== 00:18:33.935 Critical Warnings: 00:18:33.935 Available Spare Space: OK 00:18:33.935 Temperature: OK 00:18:33.935 Device Reliability: OK 00:18:33.935 Read Only: No 00:18:33.935 Volatile Memory Backup: OK 00:18:33.935 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:33.935 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:33.935 Available Spare: 0% 00:18:33.935 Available Spare Threshold: 0% [2024-12-16 16:24:22.423219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:33.935 [2024-12-16 16:24:22.431102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:33.935 [2024-12-16 16:24:22.431133] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:33.935 [2024-12-16 16:24:22.431141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.935 [2024-12-16 16:24:22.431147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.935 [2024-12-16 16:24:22.431153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.935 [2024-12-16 16:24:22.431158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.935 [2024-12-16 16:24:22.431214] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:33.935 [2024-12-16 16:24:22.431226] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:33.935 [2024-12-16 16:24:22.432210] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:33.935 [2024-12-16 16:24:22.432252] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:33.935 [2024-12-16 16:24:22.432258] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:33.935 [2024-12-16 16:24:22.433212] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:33.935 [2024-12-16 16:24:22.433224] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:33.935 [2024-12-16 16:24:22.433273] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:33.935 [2024-12-16 16:24:22.434231] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:33.935 Life Percentage Used: 0% 00:18:33.935 Data Units Read: 0 00:18:33.935 Data Units Written: 0 00:18:33.935 Host Read Commands: 0 00:18:33.935 Host Write Commands: 0 00:18:33.935 Controller Busy Time: 0 minutes 00:18:33.935 Power Cycles: 0 00:18:33.935 Power On Hours: 0 hours 00:18:33.935 Unsafe Shutdowns: 0 00:18:33.935 Unrecoverable Media Errors: 0 00:18:33.935 Lifetime Error Log Entries: 0 00:18:33.935 Warning Temperature
Time: 0 minutes 00:18:33.935 Critical Temperature Time: 0 minutes 00:18:33.935 00:18:33.935 Number of Queues 00:18:33.935 ================ 00:18:33.935 Number of I/O Submission Queues: 127 00:18:33.935 Number of I/O Completion Queues: 127 00:18:33.935 00:18:33.935 Active Namespaces 00:18:33.935 ================= 00:18:33.935 Namespace ID:1 00:18:33.935 Error Recovery Timeout: Unlimited 00:18:33.935 Command Set Identifier: NVM (00h) 00:18:33.935 Deallocate: Supported 00:18:33.935 Deallocated/Unwritten Error: Not Supported 00:18:33.935 Deallocated Read Value: Unknown 00:18:33.935 Deallocate in Write Zeroes: Not Supported 00:18:33.935 Deallocated Guard Field: 0xFFFF 00:18:33.935 Flush: Supported 00:18:33.935 Reservation: Supported 00:18:33.935 Namespace Sharing Capabilities: Multiple Controllers 00:18:33.935 Size (in LBAs): 131072 (0GiB) 00:18:33.935 Capacity (in LBAs): 131072 (0GiB) 00:18:33.935 Utilization (in LBAs): 131072 (0GiB) 00:18:33.935 NGUID: BE7B9C5E893246BF8BD941A68A554733 00:18:33.935 UUID: be7b9c5e-8932-46bf-8bd9-41a68a554733 00:18:33.935 Thin Provisioning: Not Supported 00:18:33.935 Per-NS Atomic Units: Yes 00:18:33.935 Atomic Boundary Size (Normal): 0 00:18:33.935 Atomic Boundary Size (PFail): 0 00:18:33.935 Atomic Boundary Offset: 0 00:18:33.935 Maximum Single Source Range Length: 65535 00:18:33.935 Maximum Copy Length: 65535 00:18:33.935 Maximum Source Range Count: 1 00:18:33.935 NGUID/EUI64 Never Reused: No 00:18:33.935 Namespace Write Protected: No 00:18:33.935 Number of LBA Formats: 1 00:18:33.935 Current LBA Format: LBA Format #00 00:18:33.935 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:33.935 00:18:33.935 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:34.194 [2024-12-16 16:24:22.662478] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:39.464 Initializing NVMe Controllers 00:18:39.464 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:39.464 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:39.464 Initialization complete. Launching workers. 
00:18:39.464 ======================================================== 00:18:39.464 Latency(us) 00:18:39.464 Device Information : IOPS MiB/s Average min max 00:18:39.464 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39906.12 155.88 3207.37 976.23 6668.06 00:18:39.464 ======================================================== 00:18:39.464 Total : 39906.12 155.88 3207.37 976.23 6668.06 00:18:39.464 00:18:39.464 [2024-12-16 16:24:27.768357] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.464 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:39.465 [2024-12-16 16:24:27.996022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.736 Initializing NVMe Controllers 00:18:44.736 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:44.736 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:44.736 Initialization complete. Launching workers. 00:18:44.736 ======================================================== 00:18:44.736 Latency(us) 00:18:44.736 Device Information : IOPS MiB/s Average min max 00:18:44.736 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39858.97 155.70 3210.90 971.91 9591.39 00:18:44.736 ======================================================== 00:18:44.736 Total : 39858.97 155.70 3210.90 971.91 9591.39 00:18:44.736 00:18:44.736 [2024-12-16 16:24:33.014949] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.736 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:44.736 [2024-12-16 16:24:33.224241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:50.003 [2024-12-16 16:24:38.343187] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:50.004 Initializing NVMe Controllers 00:18:50.004 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:50.004 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:50.004 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:50.004 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:50.004 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:50.004 Initialization complete. Launching workers. 
00:18:50.004 Starting thread on core 2 00:18:50.004 Starting thread on core 3 00:18:50.004 Starting thread on core 1 00:18:50.004 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:50.262 [2024-12-16 16:24:38.635553] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.551 [2024-12-16 16:24:41.695366] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.551 Initializing NVMe Controllers 00:18:53.551 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.551 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.551 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:53.551 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:53.551 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:53.551 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:53.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:53.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:53.551 Initialization complete. Launching workers. 00:18:53.551 Starting thread on core 1 with urgent priority queue 00:18:53.551 Starting thread on core 2 with urgent priority queue 00:18:53.551 Starting thread on core 3 with urgent priority queue 00:18:53.551 Starting thread on core 0 with urgent priority queue 00:18:53.551 SPDK bdev Controller (SPDK2 ) core 0: 9794.67 IO/s 10.21 secs/100000 ios 00:18:53.551 SPDK bdev Controller (SPDK2 ) core 1: 8258.33 IO/s 12.11 secs/100000 ios 00:18:53.551 SPDK bdev Controller (SPDK2 ) core 2: 7743.67 IO/s 12.91 secs/100000 ios 00:18:53.551 SPDK bdev Controller (SPDK2 ) core 3: 10675.33 IO/s 9.37 secs/100000 ios 00:18:53.551 ======================================================== 00:18:53.551 00:18:53.551 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:53.551 [2024-12-16 16:24:41.978516] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.551 Initializing NVMe Controllers 00:18:53.551 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.551 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.551 Namespace ID: 1 size: 0GB 00:18:53.551 Initialization complete. 00:18:53.551 INFO: using host memory buffer for IO 00:18:53.551 Hello world! 
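All of the runs above drive the same vfio-user controller through one transport-ID string; condensed into a standalone sketch (paths and flag values copied from this run, and assuming the target set up earlier in the log is still listening), the pattern is:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# 5 s of 4 KiB reads at queue depth 128, one worker on core mask 0x2:
"$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# The example binaries reuse the identical -r string:
"$SPDK/build/examples/reconnect" -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
"$SPDK/build/examples/arbitration" -t 3 -r "$TRID" -d 256 -g
"$SPDK/build/examples/hello_world" -d 256 -g -r "$TRID"

Because the traddr names a UNIX socket directory rather than an IP endpoint, no networking is involved: each tool attaches to the target process over the vfio-user protocol (the /cntrl socket file visible in the release messages above).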
00:18:53.551 [2024-12-16 16:24:41.991608] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.551 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:53.810 [2024-12-16 16:24:42.266400] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:55.187 Initializing NVMe Controllers 00:18:55.187 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.187 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.187 Initialization complete. Launching workers. 00:18:55.187 submit (in ns) avg, min, max = 7720.1, 3156.2, 4001174.3 00:18:55.187 complete (in ns) avg, min, max = 20537.9, 1720.0, 4994866.7 00:18:55.187 00:18:55.187 Submit histogram 00:18:55.187 ================ 00:18:55.187 Range in us Cumulative Count 00:18:55.187 3.154 - 3.170: 0.0122% ( 2) 00:18:55.187 3.170 - 3.185: 0.0305% ( 3) 00:18:55.187 3.185 - 3.200: 0.1161% ( 14) 00:18:55.187 3.200 - 3.215: 0.8492% ( 120) 00:18:55.187 3.215 - 3.230: 3.8917% ( 498) 00:18:55.187 3.230 - 3.246: 9.0054% ( 837) 00:18:55.187 3.246 - 3.261: 14.3939% ( 882) 00:18:55.187 3.261 - 3.276: 21.7070% ( 1197) 00:18:55.187 3.276 - 3.291: 29.6249% ( 1296) 00:18:55.187 3.291 - 3.307: 36.1376% ( 1066) 00:18:55.187 3.307 - 3.322: 41.4895% ( 876) 00:18:55.187 3.322 - 3.337: 45.8333% ( 711) 00:18:55.187 3.337 - 3.352: 50.6048% ( 781) 00:18:55.187 3.352 - 3.368: 54.6065% ( 655) 00:18:55.187 3.368 - 3.383: 59.9523% ( 875) 00:18:55.187 3.383 - 3.398: 66.8866% ( 1135) 00:18:55.187 3.398 - 3.413: 72.0369% ( 843) 00:18:55.187 3.413 - 3.429: 77.8043% ( 944) 00:18:55.187 3.429 - 3.444: 82.3130% ( 738) 00:18:55.187 3.444 - 3.459: 85.0806% ( 453) 00:18:55.187 3.459 - 3.474: 86.6325% ( 254) 00:18:55.187 3.474 - 3.490: 87.2984% ( 109) 00:18:55.187 3.490 - 3.505: 87.8360% ( 88) 00:18:55.187 3.505 - 3.520: 88.2942% ( 75) 00:18:55.187 3.520 - 3.535: 88.9296% ( 104) 00:18:55.187 3.535 - 3.550: 89.7116% ( 128) 00:18:55.187 3.550 - 3.566: 90.7014% ( 162) 00:18:55.187 3.566 - 3.581: 91.6178% ( 150) 00:18:55.187 3.581 - 3.596: 92.5709% ( 156) 00:18:55.187 3.596 - 3.611: 93.4018% ( 136) 00:18:55.187 3.611 - 3.627: 94.1349% ( 120) 00:18:55.187 3.627 - 3.642: 95.0024% ( 142) 00:18:55.187 3.642 - 3.657: 95.8822% ( 144) 00:18:55.187 3.657 - 3.672: 96.6520% ( 126) 00:18:55.187 3.672 - 3.688: 97.2630% ( 100) 00:18:55.187 3.688 - 3.703: 97.7639% ( 82) 00:18:55.187 3.703 - 3.718: 98.1733% ( 67) 00:18:55.187 3.718 - 3.733: 98.5398% ( 60) 00:18:55.187 3.733 - 3.749: 98.7964% ( 42) 00:18:55.187 3.749 - 3.764: 98.9858% ( 31) 00:18:55.187 3.764 - 3.779: 99.1691% ( 30) 00:18:55.187 3.779 - 3.794: 99.2546% ( 14) 00:18:55.187 3.794 - 3.810: 99.3096% ( 9) 00:18:55.187 3.810 - 3.825: 99.3463% ( 6) 00:18:55.187 3.825 - 3.840: 99.3585% ( 2) 00:18:55.187 3.840 - 3.855: 99.3707% ( 2) 00:18:55.187 3.855 - 3.870: 99.3829% ( 2) 00:18:55.187 3.870 - 3.886: 99.4013% ( 3) 00:18:55.187 3.886 - 3.901: 99.4135% ( 2) 00:18:55.187 3.931 - 3.962: 99.4196% ( 1) 00:18:55.187 3.962 - 3.992: 99.4257% ( 1) 00:18:55.187 4.053 - 4.084: 99.4501% ( 4) 00:18:55.187 4.114 - 4.145: 99.4563% ( 1) 00:18:55.187 4.145 - 4.175: 99.4624% ( 1) 00:18:55.187 4.175 - 4.206: 99.4685% ( 1) 00:18:55.187 4.206 - 4.236: 99.4746% ( 1) 00:18:55.187 4.236 - 4.267: 99.4868% ( 2) 00:18:55.187 
4.328 - 4.358: 99.4990% ( 2) 00:18:55.187 4.358 - 4.389: 99.5112% ( 2) 00:18:55.187 4.450 - 4.480: 99.5174% ( 1) 00:18:55.187 4.571 - 4.602: 99.5235% ( 1) 00:18:55.187 4.632 - 4.663: 99.5296% ( 1) 00:18:55.187 4.663 - 4.693: 99.5357% ( 1) 00:18:55.187 4.937 - 4.968: 99.5418% ( 1) 00:18:55.187 4.968 - 4.998: 99.5479% ( 1) 00:18:55.187 5.059 - 5.090: 99.5601% ( 2) 00:18:55.187 5.090 - 5.120: 99.5662% ( 1) 00:18:55.187 5.120 - 5.150: 99.5723% ( 1) 00:18:55.187 5.303 - 5.333: 99.5846% ( 2) 00:18:55.187 5.333 - 5.364: 99.5907% ( 1) 00:18:55.187 5.364 - 5.394: 99.6090% ( 3) 00:18:55.187 5.394 - 5.425: 99.6151% ( 1) 00:18:55.187 5.486 - 5.516: 99.6212% ( 1) 00:18:55.187 5.547 - 5.577: 99.6273% ( 1) 00:18:55.187 5.638 - 5.669: 99.6395% ( 2) 00:18:55.187 5.821 - 5.851: 99.6518% ( 2) 00:18:55.187 5.912 - 5.943: 99.6579% ( 1) 00:18:55.187 5.943 - 5.973: 99.6640% ( 1) 00:18:55.187 6.004 - 6.034: 99.6701% ( 1) 00:18:55.187 6.034 - 6.065: 99.6762% ( 1) 00:18:55.187 6.095 - 6.126: 99.6823% ( 1) 00:18:55.187 6.126 - 6.156: 99.6945% ( 2) 00:18:55.187 6.217 - 6.248: 99.7006% ( 1) 00:18:55.187 6.278 - 6.309: 99.7067% ( 1) 00:18:55.187 6.309 - 6.339: 99.7129% ( 1) 00:18:55.187 6.430 - 6.461: 99.7251% ( 2) 00:18:55.187 6.461 - 6.491: 99.7312% ( 1) 00:18:55.187 6.766 - 6.796: 99.7373% ( 1) 00:18:55.187 6.827 - 6.857: 99.7434% ( 1) 00:18:55.187 6.888 - 6.918: 99.7495% ( 1) 00:18:55.187 6.979 - 7.010: 99.7617% ( 2) 00:18:55.187 7.070 - 7.101: 99.7678% ( 1) 00:18:55.187 7.253 - 7.284: 99.7739% ( 1) 00:18:55.187 7.314 - 7.345: 99.7862% ( 2) 00:18:55.187 7.375 - 7.406: 99.7984% ( 2) 00:18:55.187 7.436 - 7.467: 99.8106% ( 2) 00:18:55.187 7.863 - 7.924: 99.8167% ( 1) 00:18:55.187 8.046 - 8.107: 99.8289% ( 2) 00:18:55.187 8.655 - 8.716: 99.8350% ( 1) 00:18:55.187 9.570 - 9.630: 99.8412% ( 1) 00:18:55.187 9.630 - 9.691: 99.8473% ( 1) 00:18:55.187 10.606 - 10.667: 99.8534% ( 1) 00:18:55.187 13.653 - 13.714: 99.8595% ( 1) 00:18:55.187 13.836 - 13.897: 99.8656% ( 1) 00:18:55.188 14.019 - 14.080: 99.8717% ( 1) 00:18:55.188 15.604 - 15.726: 99.8778% ( 1) 00:18:55.188 19.017 - 19.139: 99.8839% ( 1) 00:18:55.188 40.472 - 40.716: 99.8900% ( 1) 00:18:55.188 3027.139 - 3042.743: 99.8961% ( 1) 00:18:55.188 3994.575 - 4025.783: 100.0000% ( 17) 00:18:55.188 00:18:55.188 Complete histogram 00:18:55.188 ================== 00:18:55.188 Range in us Cumulative Count 00:18:55.188 1.714 - 1.722: 0.0122% ( 2) 00:18:55.188 1.722 - 1.730: 0.0305% ( 3) 00:18:55.188 1.730 - 1.737: 0.0733% ( 7) 00:18:55.188 1.737 - 1.745: 0.1161% ( 7) 00:18:55.188 1.745 - 1.752: 0.1405% ( 4) 00:18:55.188 1.752 - 1.760: 0.2138% ( 12) 00:18:55.188 1.760 - 1.768: 0.8248% ( 100) 00:18:55.188 1.768 - 1.775: 5.9934% ( 846) 00:18:55.188 1.775 - 1.783: 19.4220% ( 2198) 00:18:55.188 1.783 - 1.790: 32.8507% ( 2198) 00:18:55.188 1.790 - 1.798: 40.1332% ( 1192) 00:18:55.188 1.798 - 1.806: 43.2674% ( 513) 00:18:55.188 1.806 - 1.813: 46.6520% ( 554) 00:18:55.188 1.813 - 1.821: 56.1400% ( 1553) 00:18:55.188 1.821 - 1.829: 72.4829% ( 2675) 00:18:55.188 1.829 - 1.836: 85.3617% ( 2108) 00:18:55.188 1.836 - 1.844: 91.5506% ( 1013) 00:18:55.188 1.844 - 1.851: 94.2204% ( 437) 00:18:55.188 1.851 - 1.859: 95.9677% ( 286) 00:18:55.188 1.859 - 1.867: 96.8903% ( 151) 00:18:55.188 1.867 - 1.874: 97.5012% ( 100) 00:18:55.188 1.874 - 1.882: 97.7700% ( 44) 00:18:55.188 1.882 - 1.890: 98.0022% ( 38) 00:18:55.188 1.890 - 1.897: 98.2649% ( 43) 00:18:55.188 1.897 - 1.905: 98.5093% ( 40) 00:18:55.188 1.905 - 1.912: 98.6865% ( 29) 00:18:55.188 1.912 - 1.920: 98.8025% ( 19) 00:18:55.188 1.920 - 
1.928: 98.8575% ( 9) 00:18:55.188 1.928 - 1.935: 98.9064% ( 8) 00:18:55.188 1.935 - 1.943: 98.9675% ( 10) 00:18:55.188 1.943 - 1.950: 98.9858% ( 3) 00:18:55.188 1.950 - 1.966: 98.9919% ( 1) 00:18:55.188 1.966 - 1.981: 99.0103% ( 3) 00:18:55.188 2.011 - 2.027: 99.0469% ( 6) 00:18:55.188 2.027 - 2.042: 99.0775% ( 5) 00:18:55.188 2.042 - 2.057: 99.0958% ( 3) 00:18:55.188 2.057 - 2.072: 99.1263% ( 5) 00:18:55.188 2.164 - 2.179: 99.1447% ( 3) 00:18:55.188 2.240 - 2.255: 99.1569% ( 2) 00:18:55.188 2.255 - 2.270: 99.1752% ( 3) 00:18:55.188 2.270 - 2.286: 99.1813% ( 1) 00:18:55.188 2.316 - 2.331: 99.1935% ( 2) 00:18:55.188 2.331 - 2.347: 99.1997% ( 1) 00:18:55.188 2.347 - 2.362: 99.2058% ( 1) 00:18:55.188 2.423 - 2.438: 99.2119% ( 1) 00:18:55.188 2.453 - 2.469: 99.2180% ( 1) 00:18:55.188 2.499 - 2.514: 99.2241% ( 1) 00:18:55.188 2.530 - 2.545: 99.2302% ( 1) 00:18:55.188 2.545 - 2.560: 99.2363% ( 1) 00:18:55.188 2.560 - 2.575: 99.2424% ( 1) 00:18:55.188 2.590 - 2.606: 99.2485% ( 1) 00:18:55.188 2.606 - 2.621: 99.2546% ( 1) 00:18:55.188 2.636 - 2.651: 99.2608% ( 1) 00:18:55.188 2.651 - 2.667: 99.2730% ( 2) 00:18:55.188 2.697 - 2.712: 99.2791% ( 1) [2024-12-16 16:24:43.367146] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:55.188 2.834 - 2.850: 99.2852% ( 1) 00:18:55.188 2.910 - 2.926: 99.2913% ( 1) 00:18:55.188 3.520 - 3.535: 99.2974% ( 1) 00:18:55.188 3.627 - 3.642: 99.3035% ( 1) 00:18:55.188 3.825 - 3.840: 99.3096% ( 1) 00:18:55.188 3.962 - 3.992: 99.3157% ( 1) 00:18:55.188 3.992 - 4.023: 99.3280% ( 2) 00:18:55.188 4.084 - 4.114: 99.3341% ( 1) 00:18:55.188 4.145 - 4.175: 99.3402% ( 1) 00:18:55.188 4.236 - 4.267: 99.3524% ( 2) 00:18:55.188 4.328 - 4.358: 99.3585% ( 1) 00:18:55.188 4.419 - 4.450: 99.3646% ( 1) 00:18:55.188 4.450 - 4.480: 99.3768% ( 2) 00:18:55.188 4.480 - 4.510: 99.3829% ( 1) 00:18:55.188 4.571 - 4.602: 99.3891% ( 1) 00:18:55.188 4.663 - 4.693: 99.3952% ( 1) 00:18:55.188 4.724 - 4.754: 99.4013% ( 1) 00:18:55.188 4.907 - 4.937: 99.4074% ( 1) 00:18:55.188 4.968 - 4.998: 99.4135% ( 1) 00:18:55.188 4.998 - 5.029: 99.4196% ( 1) 00:18:55.188 5.029 - 5.059: 99.4257% ( 1) 00:18:55.188 5.059 - 5.090: 99.4318% ( 1) 00:18:55.188 5.486 - 5.516: 99.4379% ( 1) 00:18:55.188 5.730 - 5.760: 99.4440% ( 1) 00:18:55.188 6.248 - 6.278: 99.4563% ( 2) 00:18:55.188 6.552 - 6.583: 99.4685% ( 2) 00:18:55.188 6.918 - 6.949: 99.4746% ( 1) 00:18:55.188 7.771 - 7.802: 99.4807% ( 1) 00:18:55.188 8.046 - 8.107: 99.4868% ( 1) 00:18:55.188 9.691 - 9.752: 99.4929% ( 1) 00:18:55.188 10.240 - 10.301: 99.4990% ( 1) 00:18:55.188 10.850 - 10.910: 99.5051% ( 1) 00:18:55.188 12.190 - 12.251: 99.5112% ( 1) 00:18:55.188 17.554 - 17.676: 99.5174% ( 1) 00:18:55.188 25.112 - 25.234: 99.5235% ( 1) 00:18:55.188 26.088 - 26.210: 99.5296% ( 1) 00:18:55.188 3011.535 - 3027.139: 99.5357% ( 1) 00:18:55.188 3339.215 - 3354.819: 99.5418% ( 1) 00:18:55.188 3354.819 - 3370.423: 99.5479% ( 1) 00:18:55.188 3978.971 - 3994.575: 99.5601% ( 2) 00:18:55.188 3994.575 - 4025.783: 99.9939% ( 71) 00:18:55.188 4993.219 - 5024.427: 100.0000% ( 1) 00:18:55.188 00 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:55.188 [ 00:18:55.188 { 00:18:55.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:55.188 "subtype": "Discovery", 00:18:55.188 "listen_addresses": [], 00:18:55.188 "allow_any_host": true, 00:18:55.188 "hosts": [] 00:18:55.188 }, 00:18:55.188 { 00:18:55.188 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:55.188 "subtype": "NVMe", 00:18:55.188 "listen_addresses": [ 00:18:55.188 { 00:18:55.188 "trtype": "VFIOUSER", 00:18:55.188 "adrfam": "IPv4", 00:18:55.188 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:55.188 "trsvcid": "0" 00:18:55.188 } 00:18:55.188 ], 00:18:55.188 "allow_any_host": true, 00:18:55.188 "hosts": [], 00:18:55.188 "serial_number": "SPDK1", 00:18:55.188 "model_number": "SPDK bdev Controller", 00:18:55.188 "max_namespaces": 32, 00:18:55.188 "min_cntlid": 1, 00:18:55.188 "max_cntlid": 65519, 00:18:55.188 "namespaces": [ 00:18:55.188 { 00:18:55.188 "nsid": 1, 00:18:55.188 "bdev_name": "Malloc1", 00:18:55.188 "name": "Malloc1", 00:18:55.188 "nguid": "51A10F313E25433AA579142750727A65", 00:18:55.188 "uuid": "51a10f31-3e25-433a-a579-142750727a65" 00:18:55.188 }, 00:18:55.188 { 00:18:55.188 "nsid": 2, 00:18:55.188 "bdev_name": "Malloc3", 00:18:55.188 "name": "Malloc3", 00:18:55.188 "nguid": "C84488F1DD1E4E158F56B4725CC34874", 00:18:55.188 "uuid": "c84488f1-dd1e-4e15-8f56-b4725cc34874" 00:18:55.188 } 00:18:55.188 ] 00:18:55.188 }, 00:18:55.188 { 00:18:55.188 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:55.188 "subtype": "NVMe", 00:18:55.188 "listen_addresses": [ 00:18:55.188 { 00:18:55.188 "trtype": "VFIOUSER", 00:18:55.188 "adrfam": "IPv4", 00:18:55.188 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:55.188 "trsvcid": "0" 00:18:55.188 } 00:18:55.188 ], 00:18:55.188 "allow_any_host": true, 00:18:55.188 "hosts": [], 00:18:55.188 "serial_number": "SPDK2", 00:18:55.188 "model_number": "SPDK bdev Controller", 00:18:55.188 "max_namespaces": 32, 00:18:55.188 "min_cntlid": 1, 00:18:55.188 "max_cntlid": 65519, 00:18:55.188 "namespaces": [ 00:18:55.188 { 00:18:55.188 "nsid": 1, 00:18:55.188 "bdev_name": "Malloc2", 00:18:55.188 "name": "Malloc2", 00:18:55.188 "nguid": "BE7B9C5E893246BF8BD941A68A554733", 00:18:55.188 "uuid": "be7b9c5e-8932-46bf-8bd9-41a68a554733" 00:18:55.188 } 00:18:55.188 ] 00:18:55.188 } 00:18:55.188 ] 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=971752 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:55.188 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:55.189 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:55.189 [2024-12-16 16:24:43.769490] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:55.447 Malloc4 00:18:55.447 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:55.447 [2024-12-16 16:24:44.006433] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:55.448 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:55.448 Asynchronous Event Request test 00:18:55.448 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.448 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.448 Registering asynchronous event callbacks... 00:18:55.448 Starting namespace attribute notice tests for all controllers... 00:18:55.448 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:55.448 aer_cb - Changed Namespace 00:18:55.448 Cleaning up... 
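Condensed from the trace above, the hot-add sequence the AER test exercises looks like the following sketch (commands and values are taken from this run; the listener signals readiness through the touch file that the harness's waitforfile step polls for):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# Start the AER listener in the background; it creates the touch file
# once its Asynchronous Event Request is armed.
"$SPDK/test/nvme/aer/aer" -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Hot-add a second namespace: attaching the new bdev fires the
# namespace-attribute-changed AEN the listener is waiting for.
"$RPC" bdev_malloc_create 64 512 --name Malloc4
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

# Malloc4 now shows up as nsid 2 under cnode2, as the listing below confirms:
"$RPC" nvmf_get_subsystems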
00:18:55.707 [ 00:18:55.707 { 00:18:55.707 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:55.707 "subtype": "Discovery", 00:18:55.707 "listen_addresses": [], 00:18:55.707 "allow_any_host": true, 00:18:55.707 "hosts": [] 00:18:55.707 }, 00:18:55.707 { 00:18:55.707 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:55.707 "subtype": "NVMe", 00:18:55.707 "listen_addresses": [ 00:18:55.707 { 00:18:55.707 "trtype": "VFIOUSER", 00:18:55.707 "adrfam": "IPv4", 00:18:55.707 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:55.707 "trsvcid": "0" 00:18:55.707 } 00:18:55.707 ], 00:18:55.707 "allow_any_host": true, 00:18:55.707 "hosts": [], 00:18:55.707 "serial_number": "SPDK1", 00:18:55.707 "model_number": "SPDK bdev Controller", 00:18:55.707 "max_namespaces": 32, 00:18:55.707 "min_cntlid": 1, 00:18:55.707 "max_cntlid": 65519, 00:18:55.707 "namespaces": [ 00:18:55.707 { 00:18:55.707 "nsid": 1, 00:18:55.707 "bdev_name": "Malloc1", 00:18:55.707 "name": "Malloc1", 00:18:55.707 "nguid": "51A10F313E25433AA579142750727A65", 00:18:55.707 "uuid": "51a10f31-3e25-433a-a579-142750727a65" 00:18:55.707 }, 00:18:55.707 { 00:18:55.707 "nsid": 2, 00:18:55.707 "bdev_name": "Malloc3", 00:18:55.707 "name": "Malloc3", 00:18:55.707 "nguid": "C84488F1DD1E4E158F56B4725CC34874", 00:18:55.707 "uuid": "c84488f1-dd1e-4e15-8f56-b4725cc34874" 00:18:55.707 } 00:18:55.707 ] 00:18:55.707 }, 00:18:55.707 { 00:18:55.707 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:55.707 "subtype": "NVMe", 00:18:55.707 "listen_addresses": [ 00:18:55.707 { 00:18:55.707 "trtype": "VFIOUSER", 00:18:55.707 "adrfam": "IPv4", 00:18:55.707 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:55.707 "trsvcid": "0" 00:18:55.707 } 00:18:55.707 ], 00:18:55.707 "allow_any_host": true, 00:18:55.707 "hosts": [], 00:18:55.707 "serial_number": "SPDK2", 00:18:55.707 "model_number": "SPDK bdev Controller", 00:18:55.707 "max_namespaces": 32, 00:18:55.707 "min_cntlid": 1, 00:18:55.707 "max_cntlid": 65519, 00:18:55.707 "namespaces": [ 00:18:55.707 { 00:18:55.707 "nsid": 1, 00:18:55.707 "bdev_name": "Malloc2", 00:18:55.707 "name": "Malloc2", 00:18:55.707 "nguid": "BE7B9C5E893246BF8BD941A68A554733", 00:18:55.707 "uuid": "be7b9c5e-8932-46bf-8bd9-41a68a554733" 00:18:55.707 }, 00:18:55.707 { 00:18:55.707 "nsid": 2, 00:18:55.707 "bdev_name": "Malloc4", 00:18:55.707 "name": "Malloc4", 00:18:55.707 "nguid": "DBFC5DAD764144E0B19F838AD36F9A2E", 00:18:55.707 "uuid": "dbfc5dad-7641-44e0-b19f-838ad36f9a2e" 00:18:55.707 } 00:18:55.707 ] 00:18:55.707 } 00:18:55.707 ] 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 971752 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 963617 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 963617 ']' 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 963617 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 963617 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 963617' 00:18:55.707 killing process with pid 963617 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 963617 00:18:55.707 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 963617 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=971803 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 971803' 00:18:55.967 Process pid: 971803 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 971803 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 971803 ']' 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.967 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:55.967 [2024-12-16 16:24:44.563565] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:55.967 [2024-12-16 16:24:44.564429] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:55.967 [2024-12-16 16:24:44.564471] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.226 [2024-12-16 16:24:44.641313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.226 [2024-12-16 16:24:44.663773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.226 [2024-12-16 16:24:44.663807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.226 [2024-12-16 16:24:44.663815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.226 [2024-12-16 16:24:44.663822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.226 [2024-12-16 16:24:44.663827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.226 [2024-12-16 16:24:44.665285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.226 [2024-12-16 16:24:44.665313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.226 [2024-12-16 16:24:44.665396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.226 [2024-12-16 16:24:44.665397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.226 [2024-12-16 16:24:44.729636] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:56.226 [2024-12-16 16:24:44.729670] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:56.226 [2024-12-16 16:24:44.730681] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:56.226 [2024-12-16 16:24:44.730954] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:56.226 [2024-12-16 16:24:44.730998] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
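This second bring-up repeats the earlier target setup with the application in interrupt mode; a condensed sketch of the sequence traced above and below (flags copied verbatim from this run; the harness uses waitforlisten where a plain sleep stands in here):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# -m '[0,1,2,3]' pins four reactors; --interrupt-mode is what produces the
# "Set spdk_thread ... to intr mode" notices above.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
sleep 1

# The script's interrupt-mode variant passes the extra -M -I arguments
# when creating the VFIOUSER transport:
"$RPC" nvmf_create_transport -t VFIOUSER -M -I

# Per-controller setup, one socket directory each (controller 1 shown):
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0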
00:18:56.226 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.226 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:56.226 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:57.602 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:57.602 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:57.602 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:57.602 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:57.602 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:57.602 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:57.602 Malloc1 00:18:57.861 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:57.861 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:58.120 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:58.378 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:58.378 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:58.378 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:58.637 Malloc2 00:18:58.637 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:58.637 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:58.895 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 971803 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 971803 ']' 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 971803 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971803 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971803' 00:18:59.154 killing process with pid 971803 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 971803 00:18:59.154 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 971803 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:59.413 00:18:59.413 real 0m50.729s 00:18:59.413 user 3m16.126s 00:18:59.413 sys 0m3.336s 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:59.413 ************************************ 00:18:59.413 END TEST nvmf_vfio_user 00:18:59.413 ************************************ 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.413 ************************************ 00:18:59.413 START TEST nvmf_vfio_user_nvme_compliance 00:18:59.413 ************************************ 00:18:59.413 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:59.673 * Looking for test storage... 
00:18:59.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:59.673 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.674 --rc genhtml_branch_coverage=1 00:18:59.674 --rc genhtml_function_coverage=1 00:18:59.674 --rc genhtml_legend=1 00:18:59.674 --rc geninfo_all_blocks=1 00:18:59.674 --rc geninfo_unexecuted_blocks=1 00:18:59.674 00:18:59.674 ' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.674 --rc genhtml_branch_coverage=1 00:18:59.674 --rc genhtml_function_coverage=1 00:18:59.674 --rc genhtml_legend=1 00:18:59.674 --rc geninfo_all_blocks=1 00:18:59.674 --rc geninfo_unexecuted_blocks=1 00:18:59.674 00:18:59.674 ' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.674 --rc genhtml_branch_coverage=1 00:18:59.674 --rc genhtml_function_coverage=1 00:18:59.674 --rc genhtml_legend=1 00:18:59.674 --rc geninfo_all_blocks=1 00:18:59.674 --rc geninfo_unexecuted_blocks=1 00:18:59.674 00:18:59.674 ' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.674 --rc genhtml_branch_coverage=1 00:18:59.674 --rc genhtml_function_coverage=1 00:18:59.674 --rc genhtml_legend=1 00:18:59.674 --rc geninfo_all_blocks=1 00:18:59.674 --rc 
geninfo_unexecuted_blocks=1 00:18:59.674 00:18:59.674 ' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=972524 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 972524' 00:18:59.674 Process pid: 972524 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 972524 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 972524 ']' 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.674 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.675 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.675 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.675 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:59.675 [2024-12-16 16:24:48.241093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:59.675 [2024-12-16 16:24:48.241143] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.933 [2024-12-16 16:24:48.314541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:59.933 [2024-12-16 16:24:48.336658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.934 [2024-12-16 16:24:48.336695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.934 [2024-12-16 16:24:48.336701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.934 [2024-12-16 16:24:48.336707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.934 [2024-12-16 16:24:48.336712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.934 [2024-12-16 16:24:48.337962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.934 [2024-12-16 16:24:48.338071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.934 [2024-12-16 16:24:48.338073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.934 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.934 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:59.934 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:00.869 malloc0 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:00.869 16:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.869 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.128 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:01.128 00:19:01.128 00:19:01.128 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.128 http://cunit.sourceforge.net/ 00:19:01.128 00:19:01.128 00:19:01.128 Suite: nvme_compliance 00:19:01.128 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-16 16:24:49.656638] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.128 [2024-12-16 16:24:49.657981] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:01.128 [2024-12-16 16:24:49.657996] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:01.128 [2024-12-16 16:24:49.658002] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:01.128 [2024-12-16 16:24:49.660662] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.128 passed 00:19:01.387 Test: admin_identify_ctrlr_verify_fused ...[2024-12-16 16:24:49.740232] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.387 [2024-12-16 16:24:49.743254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.387 passed 00:19:01.387 Test: admin_identify_ns ...[2024-12-16 16:24:49.822302] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.387 [2024-12-16 16:24:49.882104] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:01.387 [2024-12-16 16:24:49.890103] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:01.387 [2024-12-16 16:24:49.911184] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:01.387 passed 00:19:01.387 Test: admin_get_features_mandatory_features ...[2024-12-16 16:24:49.986773] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.387 [2024-12-16 16:24:49.992804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.645 passed 00:19:01.645 Test: admin_get_features_optional_features ...[2024-12-16 16:24:50.075353] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.645 [2024-12-16 16:24:50.078376] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.645 passed 00:19:01.645 Test: admin_set_features_number_of_queues ...[2024-12-16 16:24:50.153463] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.903 [2024-12-16 16:24:50.262266] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.903 passed 00:19:01.903 Test: admin_get_log_page_mandatory_logs ...[2024-12-16 16:24:50.338175] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.903 [2024-12-16 16:24:50.341199] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.903 passed 00:19:01.904 Test: admin_get_log_page_with_lpo ...[2024-12-16 16:24:50.416025] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.904 [2024-12-16 16:24:50.485107] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:01.904 [2024-12-16 16:24:50.498162] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.162 passed 00:19:02.162 Test: fabric_property_get ...[2024-12-16 16:24:50.571771] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.162 [2024-12-16 16:24:50.573001] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:02.162 [2024-12-16 16:24:50.574790] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.162 passed 00:19:02.162 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-16 16:24:50.652318] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.162 [2024-12-16 16:24:50.653545] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:02.162 [2024-12-16 16:24:50.655331] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.162 passed 00:19:02.162 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-16 16:24:50.733042] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.420 [2024-12-16 16:24:50.817105] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.420 [2024-12-16 16:24:50.833108] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.420 [2024-12-16 16:24:50.838192] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.420 passed 00:19:02.420 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-16 16:24:50.909711] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.420 [2024-12-16 16:24:50.910947] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:02.420 [2024-12-16 16:24:50.912736] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.420 passed 00:19:02.420 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-16 16:24:50.990441] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.679 [2024-12-16 16:24:51.066100] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:02.679 [2024-12-16 16:24:51.090108] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.679 [2024-12-16 16:24:51.095178] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.679 passed 00:19:02.679 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-16 16:24:51.170728] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.679 [2024-12-16 16:24:51.171958] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:02.679 [2024-12-16 16:24:51.171981] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:02.679 [2024-12-16 16:24:51.173751] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.679 passed 00:19:02.679 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-16 16:24:51.252401] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.938 [2024-12-16 16:24:51.345100] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:02.938 [2024-12-16 16:24:51.353128] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:02.938 [2024-12-16 16:24:51.361110] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:02.938 [2024-12-16 16:24:51.369104] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:02.938 [2024-12-16 16:24:51.398198] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.938 passed 00:19:02.938 Test: admin_create_io_sq_verify_pc ...[2024-12-16 16:24:51.470787] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.938 [2024-12-16 16:24:51.486112] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:02.938 [2024-12-16 16:24:51.506955] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.938 passed 00:19:03.196 Test: admin_create_io_qp_max_qps ...[2024-12-16 16:24:51.581487] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.130 [2024-12-16 16:24:52.689103] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:04.698 [2024-12-16 16:24:53.079819] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.698 passed 00:19:04.698 Test: admin_create_io_sq_shared_cq ...[2024-12-16 16:24:53.152628] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.698 [2024-12-16 16:24:53.285102] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:04.957 [2024-12-16 16:24:53.322170] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.957 passed 00:19:04.957 00:19:04.957 Run Summary: Type Total Ran Passed Failed Inactive 00:19:04.957 suites 1 1 n/a 0 0 00:19:04.957 tests 18 18 18 0 0 00:19:04.957 asserts 
360 360 360 0 n/a 00:19:04.957 00:19:04.957 Elapsed time = 1.504 seconds 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 972524 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 972524 ']' 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 972524 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972524 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972524' 00:19:04.957 killing process with pid 972524 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 972524 00:19:04.957 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 972524 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:05.215 00:19:05.215 real 0m5.609s 00:19:05.215 user 0m15.727s 00:19:05.215 sys 0m0.491s 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.215 ************************************ 00:19:05.215 END TEST nvmf_vfio_user_nvme_compliance 00:19:05.215 ************************************ 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.215 ************************************ 00:19:05.215 START TEST nvmf_vfio_user_fuzz 00:19:05.215 ************************************ 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:05.215 * Looking for test storage... 
00:19:05.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.215 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:05.216 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.475 --rc genhtml_branch_coverage=1 00:19:05.475 --rc genhtml_function_coverage=1 00:19:05.475 --rc genhtml_legend=1 00:19:05.475 --rc geninfo_all_blocks=1 00:19:05.475 --rc geninfo_unexecuted_blocks=1 00:19:05.475 00:19:05.475 ' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.475 --rc genhtml_branch_coverage=1 00:19:05.475 --rc genhtml_function_coverage=1 00:19:05.475 --rc genhtml_legend=1 00:19:05.475 --rc geninfo_all_blocks=1 00:19:05.475 --rc geninfo_unexecuted_blocks=1 00:19:05.475 00:19:05.475 ' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.475 --rc genhtml_branch_coverage=1 00:19:05.475 --rc genhtml_function_coverage=1 00:19:05.475 --rc genhtml_legend=1 00:19:05.475 --rc geninfo_all_blocks=1 00:19:05.475 --rc geninfo_unexecuted_blocks=1 00:19:05.475 00:19:05.475 ' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.475 --rc genhtml_branch_coverage=1 00:19:05.475 --rc genhtml_function_coverage=1 00:19:05.475 --rc genhtml_legend=1 00:19:05.475 --rc geninfo_all_blocks=1 00:19:05.475 --rc geninfo_unexecuted_blocks=1 00:19:05.475 00:19:05.475 ' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:05.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:05.475 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=973479 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 973479' 00:19:05.476 Process pid: 973479 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 973479 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 973479 ']' 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
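The `[: : integer expression expected` complaint from nvmf/common.sh line 33 (seen here and in the compliance run above) is ordinary bash behavior: line 33 applies the numeric -eq test to a variable that is empty in this environment, and the test builtin refuses to treat an empty string as an integer. A minimal reproduction, with a hypothetical variable name standing in for whatever common.sh actually tests:

    flag=''                                   # empty in this run, per the trace: '[' '' -eq 1 ']'
    [ "$flag" -eq 1 ] && echo enabled         # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled    # guarded form: empty defaults to 0, no warning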
00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.476 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:05.734 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.734 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:05.734 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.670 malloc0 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.670 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:06.671 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.671 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.671 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.671 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
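Condensing the rpc_cmd trace above: before nvme_fuzz is launched below, the target is prepared with the five RPCs visible verbatim in this log. A minimal sketch of the same sequence against an already-running nvmf_tgt, assuming SPDK's stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket (the test script drives these through its rpc_cmd helper instead):

    rpc=./scripts/rpc.py                                  # assumed client path; run from the SPDK repo root
    $rpc nvmf_create_transport -t VFIOUSER                # vfio-user transport
    mkdir -p /var/run/vfio-user                           # directory backing the vfio-user socket
    $rpc bdev_malloc_create 64 512 -b malloc0             # 64 MB RAM disk with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0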
00:19:06.671 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:38.747 Fuzzing completed. Shutting down the fuzz application 00:19:38.747 00:19:38.747 Dumping successful admin opcodes: 00:19:38.747 9, 10, 00:19:38.747 Dumping successful io opcodes: 00:19:38.747 0, 00:19:38.747 NS: 0x20000081ef00 I/O qp, Total commands completed: 1165412, total successful commands: 4588, random_seed: 1373557568 00:19:38.747 NS: 0x20000081ef00 admin qp, Total commands completed: 287312, total successful commands: 67, random_seed: 3527730048 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 973479 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 973479 ']' 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 973479 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973479 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.747 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973479' 00:19:38.747 killing process with pid 973479 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 973479 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 973479 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:38.748 00:19:38.748 real 0m32.196s 00:19:38.748 user 0m34.283s 00:19:38.748 sys 0m26.674s 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.748 ************************************ 
00:19:38.748 END TEST nvmf_vfio_user_fuzz 00:19:38.748 ************************************ 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.748 ************************************ 00:19:38.748 START TEST nvmf_auth_target 00:19:38.748 ************************************ 00:19:38.748 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:38.748 * Looking for test storage... 00:19:38.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.748 --rc genhtml_branch_coverage=1 00:19:38.748 --rc genhtml_function_coverage=1 00:19:38.748 --rc genhtml_legend=1 00:19:38.748 --rc geninfo_all_blocks=1 00:19:38.748 --rc geninfo_unexecuted_blocks=1 00:19:38.748 00:19:38.748 ' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.748 --rc genhtml_branch_coverage=1 00:19:38.748 --rc genhtml_function_coverage=1 00:19:38.748 --rc genhtml_legend=1 00:19:38.748 --rc geninfo_all_blocks=1 00:19:38.748 --rc geninfo_unexecuted_blocks=1 00:19:38.748 00:19:38.748 ' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.748 --rc genhtml_branch_coverage=1 00:19:38.748 --rc genhtml_function_coverage=1 00:19:38.748 --rc genhtml_legend=1 00:19:38.748 --rc geninfo_all_blocks=1 00:19:38.748 --rc geninfo_unexecuted_blocks=1 00:19:38.748 00:19:38.748 ' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.748 --rc genhtml_branch_coverage=1 00:19:38.748 --rc genhtml_function_coverage=1 00:19:38.748 --rc genhtml_legend=1 00:19:38.748 --rc geninfo_all_blocks=1 00:19:38.748 --rc geninfo_unexecuted_blocks=1 00:19:38.748 00:19:38.748 ' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.748 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.748 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.749 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:44.028 
16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:44.028 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.028 16:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:44.028 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.028 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:44.029 Found net devices under 0000:af:00.0: cvl_0_0 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:44.029 Found net devices under 0000:af:00.1: cvl_0_1 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:44.029 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:44.029 16:25:32 
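The nvmf_tcp_init sequence above builds the point-to-point test network the rest of the run depends on: the target-side port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and the NVMe/TCP listener port is opened in the firewall; the two pings that follow verify both directions. A condensed sketch of the equivalent manual setup, with the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk                 # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port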
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:44.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:19:44.029 00:19:44.029 --- 10.0.0.2 ping statistics --- 00:19:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.029 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:19:44.029 00:19:44.029 --- 10.0.0.1 ping statistics --- 00:19:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.029 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=981791 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 981791 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 981791 ']' 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=981813 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ae2329b6e3a4588b01c035ad66bd032da84ba9999921343b 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uUl 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ae2329b6e3a4588b01c035ad66bd032da84ba9999921343b 0 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ae2329b6e3a4588b01c035ad66bd032da84ba9999921343b 0 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ae2329b6e3a4588b01c035ad66bd032da84ba9999921343b 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uUl 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uUl 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.uUl 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:44.029 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b55b6c5e56d00afaea06cf6f150fe50bf52c13f8d043992d65b99ad2d359c1f 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.x4C 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b55b6c5e56d00afaea06cf6f150fe50bf52c13f8d043992d65b99ad2d359c1f 3 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0b55b6c5e56d00afaea06cf6f150fe50bf52c13f8d043992d65b99ad2d359c1f 3 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b55b6c5e56d00afaea06cf6f150fe50bf52c13f8d043992d65b99ad2d359c1f 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.x4C 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.x4C 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.x4C 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
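At this point the first key pair is on disk: keys[0] (a 48-hex-character secret with the null digest id) and ckeys[0] (its sha512 controller counterpart). gen_dhchap_key draws the secret from /dev/urandom with xxd, and format_key wraps it into a DHHC-1 string; judging by the secrets printed later in this log, the payload is base64 over the ASCII key with its little-endian CRC32 appended, and the middle field is the digest id from the digests map above (null=0, sha256=1, sha384=2, sha512=3) as two hex digits. A standalone sketch of that wrapping (the helper structure is ours, not the script's exact code):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars, as for keys[0]
    python3 -c '
    import base64, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")        # 4-byte little-endian CRC32 trailer
    print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
    ' "$key" 0                              # 0 == null digest id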
00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc25483982e9b8ca3f215aa8b953367f 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.x89 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc25483982e9b8ca3f215aa8b953367f 1 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc25483982e9b8ca3f215aa8b953367f 1 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc25483982e9b8ca3f215aa8b953367f 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.x89 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.x89 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.x89 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=55dd5d2b3ecf17ff01971c4ed9fb4ca7072fbc39be8bd83c 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.F8c 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 55dd5d2b3ecf17ff01971c4ed9fb4ca7072fbc39be8bd83c 2 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 55dd5d2b3ecf17ff01971c4ed9fb4ca7072fbc39be8bd83c 2 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.030 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=55dd5d2b3ecf17ff01971c4ed9fb4ca7072fbc39be8bd83c 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.F8c 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.F8c 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.F8c 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=39b62db9904d690c4f8528f5baa7a1c6db966c27396393e9 00:19:44.030 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vwr 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 39b62db9904d690c4f8528f5baa7a1c6db966c27396393e9 2 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 39b62db9904d690c4f8528f5baa7a1c6db966c27396393e9 2 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=39b62db9904d690c4f8528f5baa7a1c6db966c27396393e9 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vwr 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vwr 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.vwr 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e5e8b6b347474b124d9d83d98bc78af 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UAR 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e5e8b6b347474b124d9d83d98bc78af 1 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3e5e8b6b347474b124d9d83d98bc78af 1 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e5e8b6b347474b124d9d83d98bc78af 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UAR 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UAR 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.UAR 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:44.289 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eed860171c142663a8ebd989d61f22394fe185a61fbae8ce10af19dcd94cceab 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gmp 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key eed860171c142663a8ebd989d61f22394fe185a61fbae8ce10af19dcd94cceab 3 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eed860171c142663a8ebd989d61f22394fe185a61fbae8ce10af19dcd94cceab 3 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eed860171c142663a8ebd989d61f22394fe185a61fbae8ce10af19dcd94cceab 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gmp 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gmp 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.gmp 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 981791 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 981791 ']' 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.290 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 981813 /var/tmp/host.sock 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 981813 ']' 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:44.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
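All four key slots are now populated (keys[0..3] plus controller keys ckeys[0..2]; ckeys[3] is deliberately left empty so the key3 pass exercises one-way authentication). The next phase registers each file as a named key on both RPC servers: rpc_cmd talks to the nvmf target over the default /var/tmp/spdk.sock, while hostrpc drives the host-side spdk_tgt at /var/tmp/host.sock. Condensed, for the first pair:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side (default socket /var/tmp/spdk.sock)
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.uUl
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C
    # host side (spdk_tgt was started with -r /var/tmp/host.sock)
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.uUl
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C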
00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.548 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uUl 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uUl 00:19:44.807 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uUl 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.x4C ]] 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.x89 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.066 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.324 16:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.x89 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.x89 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.F8c ]] 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F8c 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.324 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F8c 00:19:45.325 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F8c 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vwr 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vwr 00:19:45.583 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vwr 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.UAR ]] 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UAR 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UAR 00:19:45.842 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UAR 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:46.101 16:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gmp 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gmp 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gmp 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.101 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.359 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:46.359 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.359 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.360 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.360 
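Each connect_authenticate pass is the same three-step round trip, parameterized by digest, DH group, and key index; here it is sha256/null/key0: narrow the host stack's DH-CHAP negotiation options, grant the host NQN access to the subsystem with the matching key pair, then attach a controller so the handshake actually runs. Roughly (RPC as above; NQNs from this run):

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # host side: offer only sha256 and the null DH group
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    # target side: bind key0/ckey0 to this host on the subsystem
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach; bidirectional DH-CHAP (reverse via ckey0) runs during connect
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0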
16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.618 00:19:46.618 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.618 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.618 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.876 { 00:19:46.876 "cntlid": 1, 00:19:46.876 "qid": 0, 00:19:46.876 "state": "enabled", 00:19:46.876 "thread": "nvmf_tgt_poll_group_000", 00:19:46.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:46.876 "listen_address": { 00:19:46.876 "trtype": "TCP", 00:19:46.876 "adrfam": "IPv4", 00:19:46.876 "traddr": "10.0.0.2", 00:19:46.876 "trsvcid": "4420" 00:19:46.876 }, 00:19:46.876 "peer_address": { 00:19:46.876 "trtype": "TCP", 00:19:46.876 "adrfam": "IPv4", 00:19:46.876 "traddr": "10.0.0.1", 00:19:46.876 "trsvcid": "57570" 00:19:46.876 }, 00:19:46.876 "auth": { 00:19:46.876 "state": "completed", 00:19:46.876 "digest": "sha256", 00:19:46.876 "dhgroup": "null" 00:19:46.876 } 00:19:46.876 } 00:19:46.876 ]' 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.876 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.877 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.877 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.877 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.877 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.877 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.877 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.135 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:19:47.135 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:19:47.701 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.701 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:47.701 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.702 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.702 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.702 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.702 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.702 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.961 16:25:36 
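Alongside the SPDK host stack, each pass also authenticates with the kernel initiator: nvme_connect hands the plaintext DHHC-1 strings (the same secrets generated above) to nvme-cli, and the controller is disconnected again once the connect succeeds. A sketch, assuming the /tmp key files hold the plaintext DHHC-1 strings their echoed paths suggest:

    # kernel path: same secrets, passed to nvme-cli verbatim
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -l 0 \
        -q "$HOSTNQN" --hostid "${HOSTNQN##*:}" \
        --dhchap-secret      "$(cat /tmp/spdk.key-null.uUl)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.x4C)"
    nvme disconnect -n "$SUBNQN"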
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.961 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.220 00:19:48.220 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.220 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.220 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.479 { 00:19:48.479 "cntlid": 3, 00:19:48.479 "qid": 0, 00:19:48.479 "state": "enabled", 00:19:48.479 "thread": "nvmf_tgt_poll_group_000", 00:19:48.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.479 "listen_address": { 00:19:48.479 "trtype": "TCP", 00:19:48.479 "adrfam": "IPv4", 00:19:48.479 "traddr": "10.0.0.2", 00:19:48.479 "trsvcid": "4420" 00:19:48.479 }, 00:19:48.479 "peer_address": { 00:19:48.479 "trtype": "TCP", 00:19:48.479 "adrfam": "IPv4", 00:19:48.479 "traddr": "10.0.0.1", 00:19:48.479 "trsvcid": "57600" 00:19:48.479 }, 00:19:48.479 "auth": { 00:19:48.479 "state": "completed", 00:19:48.479 "digest": "sha256", 00:19:48.479 "dhgroup": "null" 00:19:48.479 } 00:19:48.479 } 00:19:48.479 ]' 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:48.479 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.479 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.479 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.479 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.737 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:19:48.737 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.305 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.564 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.564 16:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.564 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.564 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.858 00:19:49.858 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.858 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.858 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.156 { 00:19:50.156 "cntlid": 5, 00:19:50.156 "qid": 0, 00:19:50.156 "state": "enabled", 00:19:50.156 "thread": "nvmf_tgt_poll_group_000", 00:19:50.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.156 "listen_address": { 00:19:50.156 "trtype": "TCP", 00:19:50.156 "adrfam": "IPv4", 00:19:50.156 "traddr": "10.0.0.2", 00:19:50.156 "trsvcid": "4420" 00:19:50.156 }, 00:19:50.156 "peer_address": { 00:19:50.156 "trtype": "TCP", 00:19:50.156 "adrfam": "IPv4", 00:19:50.156 "traddr": "10.0.0.1", 00:19:50.156 "trsvcid": "57628" 00:19:50.156 }, 00:19:50.156 "auth": { 00:19:50.156 "state": "completed", 00:19:50.156 "digest": "sha256", 00:19:50.156 "dhgroup": "null" 00:19:50.156 } 00:19:50.156 } 00:19:50.156 ]' 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.156 16:25:38 
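Success is then read straight off the target: nvmf_subsystem_get_qpairs lists the live connection, and its auth object must report the negotiated digest and DH group with state completed, as in the JSON above (cntlid 5 for the key2 pass). The check, condensed:

    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: null
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed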
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.156 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.483 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:19:50.483 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:19:50.742 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.001 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.260 00:19:51.260 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.260 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.260 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.519 { 00:19:51.519 "cntlid": 7, 00:19:51.519 "qid": 0, 00:19:51.519 "state": "enabled", 00:19:51.519 "thread": "nvmf_tgt_poll_group_000", 00:19:51.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.519 "listen_address": { 00:19:51.519 "trtype": "TCP", 00:19:51.519 "adrfam": "IPv4", 00:19:51.519 "traddr": "10.0.0.2", 00:19:51.519 "trsvcid": "4420" 00:19:51.519 }, 00:19:51.519 "peer_address": { 00:19:51.519 "trtype": "TCP", 00:19:51.519 "adrfam": "IPv4", 00:19:51.519 "traddr": "10.0.0.1", 00:19:51.519 "trsvcid": "57656" 00:19:51.519 }, 00:19:51.519 "auth": { 00:19:51.519 "state": "completed", 00:19:51.519 "digest": "sha256", 00:19:51.519 "dhgroup": "null" 00:19:51.519 } 00:19:51.519 } 00:19:51.519 ]' 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.519 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.778 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.778 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.778 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.778 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.778 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.036 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:19:52.036 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.604 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.604 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.863 00:19:52.863 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.863 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.864 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.123 { 00:19:53.123 "cntlid": 9, 00:19:53.123 "qid": 0, 00:19:53.123 "state": "enabled", 00:19:53.123 "thread": "nvmf_tgt_poll_group_000", 00:19:53.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.123 "listen_address": { 00:19:53.123 "trtype": "TCP", 00:19:53.123 "adrfam": "IPv4", 00:19:53.123 "traddr": "10.0.0.2", 00:19:53.123 "trsvcid": "4420" 00:19:53.123 }, 00:19:53.123 "peer_address": { 00:19:53.123 "trtype": "TCP", 00:19:53.123 "adrfam": "IPv4", 00:19:53.123 "traddr": "10.0.0.1", 00:19:53.123 "trsvcid": "42390" 00:19:53.123 }, 00:19:53.123 "auth": { 00:19:53.123 "state": "completed", 00:19:53.123 "digest": "sha256", 00:19:53.123 "dhgroup": "ffdhe2048" 00:19:53.123 } 00:19:53.123 } 00:19:53.123 ]' 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:53.123 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.381 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.382 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.382 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.382 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:19:53.382 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.949 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.208 16:25:42 
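Once the SPDK bdev attach/detach has passed, the same key slot is re-tested from the kernel initiator through nvme-cli, which is the nvme connect / nvme disconnect pair in the records above. Schematically (transport address, NQNs and host UUID copied from the trace; the generated secrets are elided here):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0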
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.208 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.467 00:19:54.467 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.467 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.467 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.726 { 00:19:54.726 "cntlid": 11, 00:19:54.726 "qid": 0, 00:19:54.726 "state": "enabled", 00:19:54.726 "thread": "nvmf_tgt_poll_group_000", 00:19:54.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:54.726 "listen_address": { 00:19:54.726 "trtype": "TCP", 00:19:54.726 "adrfam": "IPv4", 00:19:54.726 "traddr": "10.0.0.2", 00:19:54.726 "trsvcid": "4420" 00:19:54.726 }, 00:19:54.726 "peer_address": { 00:19:54.726 "trtype": "TCP", 00:19:54.726 "adrfam": "IPv4", 00:19:54.726 "traddr": "10.0.0.1", 00:19:54.726 "trsvcid": "42422" 00:19:54.726 }, 00:19:54.726 "auth": { 00:19:54.726 "state": "completed", 00:19:54.726 "digest": "sha256", 00:19:54.726 "dhgroup": "ffdhe2048" 00:19:54.726 } 00:19:54.726 } 00:19:54.726 ]' 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.726 16:25:43 
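The hostrpc wrapper expanded throughout this trace (target/auth.sh@31) is plain rpc.py aimed at the RPC socket of the second SPDK application, which plays the initiator role; the target side is driven through rpc_cmd on the default socket. Its body, as inferred from the expanded command lines above:

    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }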
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.726 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.985 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:19:54.985 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.553 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.812 16:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.812 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.071 00:19:56.071 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.071 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.071 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.330 { 00:19:56.330 "cntlid": 13, 00:19:56.330 "qid": 0, 00:19:56.330 "state": "enabled", 00:19:56.330 "thread": "nvmf_tgt_poll_group_000", 00:19:56.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:56.330 "listen_address": { 00:19:56.330 "trtype": "TCP", 00:19:56.330 "adrfam": "IPv4", 00:19:56.330 "traddr": "10.0.0.2", 00:19:56.330 "trsvcid": "4420" 00:19:56.330 }, 00:19:56.330 "peer_address": { 00:19:56.330 "trtype": "TCP", 00:19:56.330 "adrfam": "IPv4", 00:19:56.330 "traddr": "10.0.0.1", 00:19:56.330 "trsvcid": "42436" 00:19:56.330 }, 00:19:56.330 "auth": { 00:19:56.330 "state": "completed", 00:19:56.330 "digest": 
"sha256", 00:19:56.330 "dhgroup": "ffdhe2048" 00:19:56.330 } 00:19:56.330 } 00:19:56.330 ]' 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.330 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.590 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:19:56.590 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.419 16:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.419 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.678 00:19:57.678 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.678 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.678 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.937 { 00:19:57.937 "cntlid": 15, 00:19:57.937 "qid": 0, 00:19:57.937 "state": "enabled", 00:19:57.937 "thread": "nvmf_tgt_poll_group_000", 00:19:57.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.937 "listen_address": { 00:19:57.937 "trtype": "TCP", 00:19:57.937 "adrfam": "IPv4", 00:19:57.937 "traddr": "10.0.0.2", 00:19:57.937 "trsvcid": "4420" 00:19:57.937 }, 00:19:57.937 "peer_address": { 00:19:57.937 "trtype": "TCP", 00:19:57.937 "adrfam": "IPv4", 00:19:57.937 "traddr": "10.0.0.1", 00:19:57.937 
"trsvcid": "42474" 00:19:57.937 }, 00:19:57.937 "auth": { 00:19:57.937 "state": "completed", 00:19:57.937 "digest": "sha256", 00:19:57.937 "dhgroup": "ffdhe2048" 00:19:57.937 } 00:19:57.937 } 00:19:57.937 ]' 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.937 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.196 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:19:58.196 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.764 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.023 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:59.023 16:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.023 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.023 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.023 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.024 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.283 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.283 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.542 { 00:19:59.542 "cntlid": 17, 00:19:59.542 "qid": 0, 00:19:59.542 "state": "enabled", 00:19:59.542 "thread": "nvmf_tgt_poll_group_000", 00:19:59.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.542 "listen_address": { 00:19:59.542 "trtype": "TCP", 00:19:59.542 "adrfam": "IPv4", 
00:19:59.542 "traddr": "10.0.0.2", 00:19:59.542 "trsvcid": "4420" 00:19:59.542 }, 00:19:59.542 "peer_address": { 00:19:59.542 "trtype": "TCP", 00:19:59.542 "adrfam": "IPv4", 00:19:59.542 "traddr": "10.0.0.1", 00:19:59.542 "trsvcid": "42498" 00:19:59.542 }, 00:19:59.542 "auth": { 00:19:59.542 "state": "completed", 00:19:59.542 "digest": "sha256", 00:19:59.542 "dhgroup": "ffdhe3072" 00:19:59.542 } 00:19:59.542 } 00:19:59.542 ]' 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.542 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.542 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.542 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.542 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.800 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:19:59.801 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.369 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.628 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.628 00:20:00.628 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.628 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.628 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.887 { 
00:20:00.887 "cntlid": 19, 00:20:00.887 "qid": 0, 00:20:00.887 "state": "enabled", 00:20:00.887 "thread": "nvmf_tgt_poll_group_000", 00:20:00.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.887 "listen_address": { 00:20:00.887 "trtype": "TCP", 00:20:00.887 "adrfam": "IPv4", 00:20:00.887 "traddr": "10.0.0.2", 00:20:00.887 "trsvcid": "4420" 00:20:00.887 }, 00:20:00.887 "peer_address": { 00:20:00.887 "trtype": "TCP", 00:20:00.887 "adrfam": "IPv4", 00:20:00.887 "traddr": "10.0.0.1", 00:20:00.887 "trsvcid": "42524" 00:20:00.887 }, 00:20:00.887 "auth": { 00:20:00.887 "state": "completed", 00:20:00.887 "digest": "sha256", 00:20:00.887 "dhgroup": "ffdhe3072" 00:20:00.887 } 00:20:00.887 } 00:20:00.887 ]' 00:20:00.887 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.145 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.405 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:01.405 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.972 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.231 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.490 00:20:02.490 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.490 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.490 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.490 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.490 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.490 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.490 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.490 16:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.490 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.490 { 00:20:02.490 "cntlid": 21, 00:20:02.490 "qid": 0, 00:20:02.490 "state": "enabled", 00:20:02.490 "thread": "nvmf_tgt_poll_group_000", 00:20:02.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.490 "listen_address": { 00:20:02.490 "trtype": "TCP", 00:20:02.490 "adrfam": "IPv4", 00:20:02.490 "traddr": "10.0.0.2", 00:20:02.490 "trsvcid": "4420" 00:20:02.490 }, 00:20:02.490 "peer_address": { 00:20:02.490 "trtype": "TCP", 00:20:02.490 "adrfam": "IPv4", 00:20:02.490 "traddr": "10.0.0.1", 00:20:02.490 "trsvcid": "51280" 00:20:02.490 }, 00:20:02.490 "auth": { 00:20:02.490 "state": "completed", 00:20:02.490 "digest": "sha256", 00:20:02.490 "dhgroup": "ffdhe3072" 00:20:02.490 } 00:20:02.490 } 00:20:02.490 ]' 00:20:02.490 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.748 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.006 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:03.006 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.573 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.573 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.832 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.091 16:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.091 { 00:20:04.091 "cntlid": 23, 00:20:04.091 "qid": 0, 00:20:04.091 "state": "enabled", 00:20:04.091 "thread": "nvmf_tgt_poll_group_000", 00:20:04.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.091 "listen_address": { 00:20:04.091 "trtype": "TCP", 00:20:04.091 "adrfam": "IPv4", 00:20:04.091 "traddr": "10.0.0.2", 00:20:04.091 "trsvcid": "4420" 00:20:04.091 }, 00:20:04.091 "peer_address": { 00:20:04.091 "trtype": "TCP", 00:20:04.091 "adrfam": "IPv4", 00:20:04.091 "traddr": "10.0.0.1", 00:20:04.091 "trsvcid": "51304" 00:20:04.091 }, 00:20:04.091 "auth": { 00:20:04.091 "state": "completed", 00:20:04.091 "digest": "sha256", 00:20:04.091 "dhgroup": "ffdhe3072" 00:20:04.091 } 00:20:04.091 } 00:20:04.091 ]' 00:20:04.091 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.350 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.609 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:04.609 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:05.177 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.178 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.437 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.695 { 00:20:05.695 "cntlid": 25, 00:20:05.695 "qid": 0, 00:20:05.695 "state": "enabled", 00:20:05.695 "thread": "nvmf_tgt_poll_group_000", 00:20:05.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.695 "listen_address": { 00:20:05.695 "trtype": "TCP", 00:20:05.695 "adrfam": "IPv4", 00:20:05.695 "traddr": "10.0.0.2", 00:20:05.695 "trsvcid": "4420" 00:20:05.695 }, 00:20:05.695 "peer_address": { 00:20:05.695 "trtype": "TCP", 00:20:05.695 "adrfam": "IPv4", 00:20:05.695 "traddr": "10.0.0.1", 00:20:05.695 "trsvcid": "51344" 00:20:05.695 }, 00:20:05.695 "auth": { 00:20:05.695 "state": "completed", 00:20:05.695 "digest": "sha256", 00:20:05.695 "dhgroup": "ffdhe4096" 00:20:05.695 } 00:20:05.695 } 00:20:05.695 ]' 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.695 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.953 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.953 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.953 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.953 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.953 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.212 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:06.212 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.778 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.345 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.345 16:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.345 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.345 { 00:20:07.345 "cntlid": 27, 00:20:07.346 "qid": 0, 00:20:07.346 "state": "enabled", 00:20:07.346 "thread": "nvmf_tgt_poll_group_000", 00:20:07.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.346 "listen_address": { 00:20:07.346 "trtype": "TCP", 00:20:07.346 "adrfam": "IPv4", 00:20:07.346 "traddr": "10.0.0.2", 00:20:07.346 "trsvcid": "4420" 00:20:07.346 }, 00:20:07.346 "peer_address": { 00:20:07.346 "trtype": "TCP", 00:20:07.346 "adrfam": "IPv4", 00:20:07.346 "traddr": "10.0.0.1", 00:20:07.346 "trsvcid": "51364" 00:20:07.346 }, 00:20:07.346 "auth": { 00:20:07.346 "state": "completed", 00:20:07.346 "digest": "sha256", 00:20:07.346 "dhgroup": "ffdhe4096" 00:20:07.346 } 00:20:07.346 } 00:20:07.346 ]' 00:20:07.346 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.346 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.346 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.604 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.604 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.604 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.604 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.604 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.863 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:07.863 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.430 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.430 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.689 00:20:08.689 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.689 16:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.689 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.947 { 00:20:08.947 "cntlid": 29, 00:20:08.947 "qid": 0, 00:20:08.947 "state": "enabled", 00:20:08.947 "thread": "nvmf_tgt_poll_group_000", 00:20:08.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.947 "listen_address": { 00:20:08.947 "trtype": "TCP", 00:20:08.947 "adrfam": "IPv4", 00:20:08.947 "traddr": "10.0.0.2", 00:20:08.947 "trsvcid": "4420" 00:20:08.947 }, 00:20:08.947 "peer_address": { 00:20:08.947 "trtype": "TCP", 00:20:08.947 "adrfam": "IPv4", 00:20:08.947 "traddr": "10.0.0.1", 00:20:08.947 "trsvcid": "51386" 00:20:08.947 }, 00:20:08.947 "auth": { 00:20:08.947 "state": "completed", 00:20:08.947 "digest": "sha256", 00:20:08.947 "dhgroup": "ffdhe4096" 00:20:08.947 } 00:20:08.947 } 00:20:08.947 ]' 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.947 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:09.206 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret 
DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:09.773 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.032 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.291 00:20:10.291 16:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.291 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.291 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.551 { 00:20:10.551 "cntlid": 31, 00:20:10.551 "qid": 0, 00:20:10.551 "state": "enabled", 00:20:10.551 "thread": "nvmf_tgt_poll_group_000", 00:20:10.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.551 "listen_address": { 00:20:10.551 "trtype": "TCP", 00:20:10.551 "adrfam": "IPv4", 00:20:10.551 "traddr": "10.0.0.2", 00:20:10.551 "trsvcid": "4420" 00:20:10.551 }, 00:20:10.551 "peer_address": { 00:20:10.551 "trtype": "TCP", 00:20:10.551 "adrfam": "IPv4", 00:20:10.551 "traddr": "10.0.0.1", 00:20:10.551 "trsvcid": "51430" 00:20:10.551 }, 00:20:10.551 "auth": { 00:20:10.551 "state": "completed", 00:20:10.551 "digest": "sha256", 00:20:10.551 "dhgroup": "ffdhe4096" 00:20:10.551 } 00:20:10.551 } 00:20:10.551 ]' 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.551 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:10.810 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:11.376 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.634 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:11.634 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.634 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.634 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.635 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.635 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.635 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.635 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.635 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.200 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.200 { 00:20:12.200 "cntlid": 33, 00:20:12.200 "qid": 0, 00:20:12.200 "state": "enabled", 00:20:12.200 "thread": "nvmf_tgt_poll_group_000", 00:20:12.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.200 "listen_address": { 00:20:12.200 "trtype": "TCP", 00:20:12.200 "adrfam": "IPv4", 00:20:12.200 "traddr": "10.0.0.2", 00:20:12.200 "trsvcid": "4420" 00:20:12.200 }, 00:20:12.200 "peer_address": { 00:20:12.200 "trtype": "TCP", 00:20:12.200 "adrfam": "IPv4", 00:20:12.200 "traddr": "10.0.0.1", 00:20:12.200 "trsvcid": "60580" 00:20:12.200 }, 00:20:12.200 "auth": { 00:20:12.200 "state": "completed", 00:20:12.200 "digest": "sha256", 00:20:12.200 "dhgroup": "ffdhe6144" 00:20:12.200 } 00:20:12.200 } 00:20:12.200 ]' 00:20:12.200 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.459 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.718 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret 
DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:12.718 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.285 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.544 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.804 00:20:13.804 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.804 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.804 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.063 { 00:20:14.063 "cntlid": 35, 00:20:14.063 "qid": 0, 00:20:14.063 "state": "enabled", 00:20:14.063 "thread": "nvmf_tgt_poll_group_000", 00:20:14.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.063 "listen_address": { 00:20:14.063 "trtype": "TCP", 00:20:14.063 "adrfam": "IPv4", 00:20:14.063 "traddr": "10.0.0.2", 00:20:14.063 "trsvcid": "4420" 00:20:14.063 }, 00:20:14.063 "peer_address": { 00:20:14.063 "trtype": "TCP", 00:20:14.063 "adrfam": "IPv4", 00:20:14.063 "traddr": "10.0.0.1", 00:20:14.063 "trsvcid": "60612" 00:20:14.063 }, 00:20:14.063 "auth": { 00:20:14.063 "state": "completed", 00:20:14.063 "digest": "sha256", 00:20:14.063 "dhgroup": "ffdhe6144" 00:20:14.063 } 00:20:14.063 } 00:20:14.063 ]' 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.063 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.322 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:14.322 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.890 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.149 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.408 00:20:15.408 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.408 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.408 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.667 { 00:20:15.667 "cntlid": 37, 00:20:15.667 "qid": 0, 00:20:15.667 "state": "enabled", 00:20:15.667 "thread": "nvmf_tgt_poll_group_000", 00:20:15.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.667 "listen_address": { 00:20:15.667 "trtype": "TCP", 00:20:15.667 "adrfam": "IPv4", 00:20:15.667 "traddr": "10.0.0.2", 00:20:15.667 "trsvcid": "4420" 00:20:15.667 }, 00:20:15.667 "peer_address": { 00:20:15.667 "trtype": "TCP", 00:20:15.667 "adrfam": "IPv4", 00:20:15.667 "traddr": "10.0.0.1", 00:20:15.667 "trsvcid": "60638" 00:20:15.667 }, 00:20:15.667 "auth": { 00:20:15.667 "state": "completed", 00:20:15.667 "digest": "sha256", 00:20:15.667 "dhgroup": "ffdhe6144" 00:20:15.667 } 00:20:15.667 } 00:20:15.667 ]' 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:15.667 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.926 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:15.926 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:16.494 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.494 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.752 16:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.752 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.010 00:20:17.010 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.010 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.011 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.269 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.269 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.269 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.269 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.269 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.269 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.269 { 00:20:17.269 "cntlid": 39, 00:20:17.269 "qid": 0, 00:20:17.269 "state": "enabled", 00:20:17.269 "thread": "nvmf_tgt_poll_group_000", 00:20:17.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.269 "listen_address": { 00:20:17.270 "trtype": "TCP", 00:20:17.270 "adrfam": "IPv4", 00:20:17.270 "traddr": "10.0.0.2", 00:20:17.270 "trsvcid": "4420" 00:20:17.270 }, 00:20:17.270 "peer_address": { 00:20:17.270 "trtype": "TCP", 00:20:17.270 "adrfam": "IPv4", 00:20:17.270 "traddr": "10.0.0.1", 00:20:17.270 "trsvcid": "60666" 00:20:17.270 }, 00:20:17.270 "auth": { 00:20:17.270 "state": "completed", 00:20:17.270 "digest": "sha256", 00:20:17.270 "dhgroup": "ffdhe6144" 00:20:17.270 } 00:20:17.270 } 00:20:17.270 ]' 00:20:17.270 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.270 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.270 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.270 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.270 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.528 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:17.528 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.528 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.528 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:17.528 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.096 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
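For orientation, this is the per-key cycle the trace repeats for every digest/dhgroup/key combination, distilled into plain shell. Every path, NQN, RPC name, and flag below is copied verbatim from the surrounding log; rpc_cmd is the test framework's target-side RPC wrapper (its expansion is suppressed by xtrace_disable here), so read this as a sketch of the flow rather than the script itself.

# Host-side RPC endpoint, as expanded by hostrpc in target/auth.sh@31 above.
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

# 1. Pin the SPDK initiator to the digest/dhgroup pair under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# 2. Register the host on the subsystem with the key under test; the
#    controller key (ckeyN) is omitted for the key3 iterations in this log.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller over TCP, performing DH-HMAC-CHAP authentication.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. The controller only exists if authentication succeeded.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 5. Detach before the next combination.
hostrpc bdev_nvme_detach_controller nvme0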
00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.355 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.923 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.923 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.182 { 00:20:19.182 "cntlid": 41, 00:20:19.182 "qid": 0, 00:20:19.182 "state": "enabled", 00:20:19.182 "thread": "nvmf_tgt_poll_group_000", 00:20:19.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.182 "listen_address": { 00:20:19.182 "trtype": "TCP", 00:20:19.182 "adrfam": "IPv4", 00:20:19.182 "traddr": "10.0.0.2", 00:20:19.182 "trsvcid": "4420" 00:20:19.182 }, 00:20:19.182 "peer_address": { 00:20:19.182 "trtype": "TCP", 00:20:19.182 "adrfam": "IPv4", 00:20:19.182 "traddr": "10.0.0.1", 00:20:19.182 "trsvcid": "60692" 00:20:19.182 }, 00:20:19.182 "auth": { 00:20:19.182 "state": "completed", 00:20:19.182 "digest": "sha256", 00:20:19.182 "dhgroup": "ffdhe8192" 00:20:19.182 } 00:20:19.182 } 00:20:19.182 ]' 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.182 16:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.182 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.441 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:19.441 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.008 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.267 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.526 00:20:20.526 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.526 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.526 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.785 { 00:20:20.785 "cntlid": 43, 00:20:20.785 "qid": 0, 00:20:20.785 "state": "enabled", 00:20:20.785 "thread": "nvmf_tgt_poll_group_000", 00:20:20.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.785 "listen_address": { 00:20:20.785 "trtype": "TCP", 00:20:20.785 "adrfam": "IPv4", 00:20:20.785 "traddr": "10.0.0.2", 00:20:20.785 "trsvcid": "4420" 00:20:20.785 }, 00:20:20.785 "peer_address": { 00:20:20.785 "trtype": "TCP", 00:20:20.785 "adrfam": "IPv4", 00:20:20.785 "traddr": "10.0.0.1", 00:20:20.785 "trsvcid": "60714" 00:20:20.785 }, 00:20:20.785 "auth": { 00:20:20.785 "state": "completed", 00:20:20.785 "digest": "sha256", 00:20:20.785 "dhgroup": "ffdhe8192" 00:20:20.785 } 00:20:20.785 } 00:20:20.785 ]' 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:20.785 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:21.044 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:21.612 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.612 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:21.612 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.612 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.871 16:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.871 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.439 00:20:22.439 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.439 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.439 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.698 { 00:20:22.698 "cntlid": 45, 00:20:22.698 "qid": 0, 00:20:22.698 "state": "enabled", 00:20:22.698 "thread": "nvmf_tgt_poll_group_000", 00:20:22.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.698 "listen_address": { 00:20:22.698 "trtype": "TCP", 00:20:22.698 "adrfam": "IPv4", 00:20:22.698 "traddr": "10.0.0.2", 00:20:22.698 "trsvcid": "4420" 00:20:22.698 }, 00:20:22.698 "peer_address": { 00:20:22.698 "trtype": "TCP", 00:20:22.698 "adrfam": "IPv4", 00:20:22.698 "traddr": "10.0.0.1", 00:20:22.698 "trsvcid": "37992" 00:20:22.698 }, 00:20:22.698 "auth": { 00:20:22.698 "state": "completed", 00:20:22.698 "digest": "sha256", 00:20:22.698 "dhgroup": "ffdhe8192" 00:20:22.698 } 00:20:22.698 } 00:20:22.698 ]' 00:20:22.698 
16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.698 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.957 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:22.957 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:23.523 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.523 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.523 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.523 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.523 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.523 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.523 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.523 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.782 16:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.782 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.350 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.350 { 00:20:24.350 "cntlid": 47, 00:20:24.350 "qid": 0, 00:20:24.350 "state": "enabled", 00:20:24.350 "thread": "nvmf_tgt_poll_group_000", 00:20:24.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.350 "listen_address": { 00:20:24.350 "trtype": "TCP", 00:20:24.350 "adrfam": "IPv4", 00:20:24.350 "traddr": "10.0.0.2", 00:20:24.350 "trsvcid": "4420" 00:20:24.350 }, 00:20:24.350 "peer_address": { 00:20:24.350 "trtype": "TCP", 00:20:24.350 "adrfam": "IPv4", 00:20:24.350 "traddr": "10.0.0.1", 00:20:24.350 "trsvcid": "38006" 00:20:24.350 }, 00:20:24.350 "auth": { 00:20:24.350 "state": "completed", 00:20:24.350 
"digest": "sha256", 00:20:24.350 "dhgroup": "ffdhe8192" 00:20:24.350 } 00:20:24.350 } 00:20:24.350 ]' 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.350 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.609 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.609 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.609 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.609 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.609 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.867 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:24.867 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.435 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:25.435 16:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.435 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.694 00:20:25.694 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.694 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.694 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.953 { 00:20:25.953 "cntlid": 49, 00:20:25.953 "qid": 0, 00:20:25.953 "state": "enabled", 00:20:25.953 "thread": "nvmf_tgt_poll_group_000", 00:20:25.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.953 "listen_address": { 00:20:25.953 "trtype": "TCP", 00:20:25.953 "adrfam": "IPv4", 
00:20:25.953 "traddr": "10.0.0.2", 00:20:25.953 "trsvcid": "4420" 00:20:25.953 }, 00:20:25.953 "peer_address": { 00:20:25.953 "trtype": "TCP", 00:20:25.953 "adrfam": "IPv4", 00:20:25.953 "traddr": "10.0.0.1", 00:20:25.953 "trsvcid": "38038" 00:20:25.953 }, 00:20:25.953 "auth": { 00:20:25.953 "state": "completed", 00:20:25.953 "digest": "sha384", 00:20:25.953 "dhgroup": "null" 00:20:25.953 } 00:20:25.953 } 00:20:25.953 ]' 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:25.953 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.212 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.212 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.212 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.212 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:26.212 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.779 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.038 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.297 00:20:27.297 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.297 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.297 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.558 { 00:20:27.558 "cntlid": 51, 00:20:27.558 "qid": 0, 00:20:27.558 "state": "enabled", 
00:20:27.558 "thread": "nvmf_tgt_poll_group_000", 00:20:27.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.558 "listen_address": { 00:20:27.558 "trtype": "TCP", 00:20:27.558 "adrfam": "IPv4", 00:20:27.558 "traddr": "10.0.0.2", 00:20:27.558 "trsvcid": "4420" 00:20:27.558 }, 00:20:27.558 "peer_address": { 00:20:27.558 "trtype": "TCP", 00:20:27.558 "adrfam": "IPv4", 00:20:27.558 "traddr": "10.0.0.1", 00:20:27.558 "trsvcid": "38074" 00:20:27.558 }, 00:20:27.558 "auth": { 00:20:27.558 "state": "completed", 00:20:27.558 "digest": "sha384", 00:20:27.558 "dhgroup": "null" 00:20:27.558 } 00:20:27.558 } 00:20:27.558 ]' 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:27.558 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.824 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.824 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.824 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.824 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:27.824 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:28.477 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.744 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.003 00:20:29.003 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.003 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.003 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.261 16:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.261 { 00:20:29.261 "cntlid": 53, 00:20:29.261 "qid": 0, 00:20:29.261 "state": "enabled", 00:20:29.261 "thread": "nvmf_tgt_poll_group_000", 00:20:29.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.261 "listen_address": { 00:20:29.261 "trtype": "TCP", 00:20:29.261 "adrfam": "IPv4", 00:20:29.261 "traddr": "10.0.0.2", 00:20:29.261 "trsvcid": "4420" 00:20:29.261 }, 00:20:29.261 "peer_address": { 00:20:29.261 "trtype": "TCP", 00:20:29.261 "adrfam": "IPv4", 00:20:29.261 "traddr": "10.0.0.1", 00:20:29.261 "trsvcid": "38100" 00:20:29.261 }, 00:20:29.261 "auth": { 00:20:29.261 "state": "completed", 00:20:29.261 "digest": "sha384", 00:20:29.261 "dhgroup": "null" 00:20:29.261 } 00:20:29.261 } 00:20:29.261 ]' 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.261 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.262 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:29.262 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.262 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.262 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.262 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.520 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:29.520 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:30.087 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.087 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.087 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.087 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.087 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.087 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:30.088 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.088 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.346 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.605 00:20:30.605 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.605 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.605 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.605 { 00:20:30.605 "cntlid": 55, 00:20:30.605 "qid": 0, 00:20:30.605 "state": "enabled", 00:20:30.605 "thread": "nvmf_tgt_poll_group_000", 00:20:30.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.605 "listen_address": { 00:20:30.605 "trtype": "TCP", 00:20:30.605 "adrfam": "IPv4", 00:20:30.605 "traddr": "10.0.0.2", 00:20:30.605 "trsvcid": "4420" 00:20:30.605 }, 00:20:30.605 "peer_address": { 00:20:30.605 "trtype": "TCP", 00:20:30.605 "adrfam": "IPv4", 00:20:30.605 "traddr": "10.0.0.1", 00:20:30.605 "trsvcid": "38132" 00:20:30.605 }, 00:20:30.605 "auth": { 00:20:30.605 "state": "completed", 00:20:30.605 "digest": "sha384", 00:20:30.605 "dhgroup": "null" 00:20:30.605 } 00:20:30.605 } 00:20:30.605 ]' 00:20:30.605 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.864 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.122 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:31.123 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.690 16:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.690 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.949 00:20:31.949 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.949 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.949 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.207 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.207 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.207 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:32.207 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.207 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.207 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.207 { 00:20:32.207 "cntlid": 57, 00:20:32.207 "qid": 0, 00:20:32.207 "state": "enabled", 00:20:32.207 "thread": "nvmf_tgt_poll_group_000", 00:20:32.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.207 "listen_address": { 00:20:32.207 "trtype": "TCP", 00:20:32.207 "adrfam": "IPv4", 00:20:32.207 "traddr": "10.0.0.2", 00:20:32.207 "trsvcid": "4420" 00:20:32.207 }, 00:20:32.207 "peer_address": { 00:20:32.208 "trtype": "TCP", 00:20:32.208 "adrfam": "IPv4", 00:20:32.208 "traddr": "10.0.0.1", 00:20:32.208 "trsvcid": "57126" 00:20:32.208 }, 00:20:32.208 "auth": { 00:20:32.208 "state": "completed", 00:20:32.208 "digest": "sha384", 00:20:32.208 "dhgroup": "ffdhe2048" 00:20:32.208 } 00:20:32.208 } 00:20:32.208 ]' 00:20:32.208 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.208 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.208 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.466 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.466 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.466 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.466 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.466 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.725 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:32.725 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.292 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.293 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.293 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.293 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.551 00:20:33.551 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.551 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.551 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.810 { 00:20:33.810 "cntlid": 59, 00:20:33.810 "qid": 0, 00:20:33.810 "state": "enabled", 00:20:33.810 "thread": "nvmf_tgt_poll_group_000", 00:20:33.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.810 "listen_address": { 00:20:33.810 "trtype": "TCP", 00:20:33.810 "adrfam": "IPv4", 00:20:33.810 "traddr": "10.0.0.2", 00:20:33.810 "trsvcid": "4420" 00:20:33.810 }, 00:20:33.810 "peer_address": { 00:20:33.810 "trtype": "TCP", 00:20:33.810 "adrfam": "IPv4", 00:20:33.810 "traddr": "10.0.0.1", 00:20:33.810 "trsvcid": "57174" 00:20:33.810 }, 00:20:33.810 "auth": { 00:20:33.810 "state": "completed", 00:20:33.810 "digest": "sha384", 00:20:33.810 "dhgroup": "ffdhe2048" 00:20:33.810 } 00:20:33.810 } 00:20:33.810 ]' 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.810 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.069 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.069 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.069 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.069 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:34.069 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:34.636 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.895 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.154 00:20:35.154 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.154 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:35.154 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.412 { 00:20:35.412 "cntlid": 61, 00:20:35.412 "qid": 0, 00:20:35.412 "state": "enabled", 00:20:35.412 "thread": "nvmf_tgt_poll_group_000", 00:20:35.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.412 "listen_address": { 00:20:35.412 "trtype": "TCP", 00:20:35.412 "adrfam": "IPv4", 00:20:35.412 "traddr": "10.0.0.2", 00:20:35.412 "trsvcid": "4420" 00:20:35.412 }, 00:20:35.412 "peer_address": { 00:20:35.412 "trtype": "TCP", 00:20:35.412 "adrfam": "IPv4", 00:20:35.412 "traddr": "10.0.0.1", 00:20:35.412 "trsvcid": "57196" 00:20:35.412 }, 00:20:35.412 "auth": { 00:20:35.412 "state": "completed", 00:20:35.412 "digest": "sha384", 00:20:35.412 "dhgroup": "ffdhe2048" 00:20:35.412 } 00:20:35.412 } 00:20:35.412 ]' 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.412 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.412 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.412 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.672 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.672 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.672 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.672 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:35.672 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:36.238 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.239 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.239 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.239 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.497 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.497 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.497 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.497 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.497 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.756 00:20:36.756 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.756 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.756 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.014 { 00:20:37.014 "cntlid": 63, 00:20:37.014 "qid": 0, 00:20:37.014 "state": "enabled", 00:20:37.014 "thread": "nvmf_tgt_poll_group_000", 00:20:37.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.014 "listen_address": { 00:20:37.014 "trtype": "TCP", 00:20:37.014 "adrfam": "IPv4", 00:20:37.014 "traddr": "10.0.0.2", 00:20:37.014 "trsvcid": "4420" 00:20:37.014 }, 00:20:37.014 "peer_address": { 00:20:37.014 "trtype": "TCP", 00:20:37.014 "adrfam": "IPv4", 00:20:37.014 "traddr": "10.0.0.1", 00:20:37.014 "trsvcid": "57232" 00:20:37.014 }, 00:20:37.014 "auth": { 00:20:37.014 "state": "completed", 00:20:37.014 "digest": "sha384", 00:20:37.014 "dhgroup": "ffdhe2048" 00:20:37.014 } 00:20:37.014 } 00:20:37.014 ]' 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.014 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.273 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.273 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.273 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.273 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:37.273 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:37.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.841 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.099 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:38.099 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.099 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.100 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.358 
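The iteration that just began above (sha384 with ffdhe3072, key0) follows the same fixed sequence as every other digest/dhgroup/key combination in this trace. A minimal sketch of that sequence in shell, assuming the rpc_cmd and hostrpc wrappers defined earlier in target/auth.sh and autotest_common.sh (rpc_cmd talks to the target's default RPC socket, hostrpc to /var/tmp/host.sock exactly as the trace shows); the DHHC-1 key material itself is elided:

    # Paths and flags below are taken verbatim from the trace; rpc_cmd is the
    # test framework's target-side wrapper, not defined in this sketch.
    rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
    subnqn='nqn.2024-03.io.spdk:cnode0'
    hostnqn='nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562'
    # Host side: restrict the initiator to the one digest/dhgroup under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # Target side: authorize the host NQN with this iteration's key pair.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
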
00:20:38.358 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.358 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.358 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.617 { 00:20:38.617 "cntlid": 65, 00:20:38.617 "qid": 0, 00:20:38.617 "state": "enabled", 00:20:38.617 "thread": "nvmf_tgt_poll_group_000", 00:20:38.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.617 "listen_address": { 00:20:38.617 "trtype": "TCP", 00:20:38.617 "adrfam": "IPv4", 00:20:38.617 "traddr": "10.0.0.2", 00:20:38.617 "trsvcid": "4420" 00:20:38.617 }, 00:20:38.617 "peer_address": { 00:20:38.617 "trtype": "TCP", 00:20:38.617 "adrfam": "IPv4", 00:20:38.617 "traddr": "10.0.0.1", 00:20:38.617 "trsvcid": "57262" 00:20:38.617 }, 00:20:38.617 "auth": { 00:20:38.617 "state": "completed", 00:20:38.617 "digest": "sha384", 00:20:38.617 "dhgroup": "ffdhe3072" 00:20:38.617 } 00:20:38.617 } 00:20:38.617 ]' 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.617 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.876 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.876 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.876 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.876 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:38.876 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:39.443 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.702 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.961 00:20:39.961 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.961 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.961 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.220 { 00:20:40.220 "cntlid": 67, 00:20:40.220 "qid": 0, 00:20:40.220 "state": "enabled", 00:20:40.220 "thread": "nvmf_tgt_poll_group_000", 00:20:40.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.220 "listen_address": { 00:20:40.220 "trtype": "TCP", 00:20:40.220 "adrfam": "IPv4", 00:20:40.220 "traddr": "10.0.0.2", 00:20:40.220 "trsvcid": "4420" 00:20:40.220 }, 00:20:40.220 "peer_address": { 00:20:40.220 "trtype": "TCP", 00:20:40.220 "adrfam": "IPv4", 00:20:40.220 "traddr": "10.0.0.1", 00:20:40.220 "trsvcid": "57292" 00:20:40.220 }, 00:20:40.220 "auth": { 00:20:40.220 "state": "completed", 00:20:40.220 "digest": "sha384", 00:20:40.220 "dhgroup": "ffdhe3072" 00:20:40.220 } 00:20:40.220 } 00:20:40.220 ]' 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.220 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.479 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.479 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.479 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.479 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.479 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.479 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret 
DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:40.479 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:41.049 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.049 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.049 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.049 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.307 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.566 00:20:41.566 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.566 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.566 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.824 { 00:20:41.824 "cntlid": 69, 00:20:41.824 "qid": 0, 00:20:41.824 "state": "enabled", 00:20:41.824 "thread": "nvmf_tgt_poll_group_000", 00:20:41.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.824 "listen_address": { 00:20:41.824 "trtype": "TCP", 00:20:41.824 "adrfam": "IPv4", 00:20:41.824 "traddr": "10.0.0.2", 00:20:41.824 "trsvcid": "4420" 00:20:41.824 }, 00:20:41.824 "peer_address": { 00:20:41.824 "trtype": "TCP", 00:20:41.824 "adrfam": "IPv4", 00:20:41.824 "traddr": "10.0.0.1", 00:20:41.824 "trsvcid": "57316" 00:20:41.824 }, 00:20:41.824 "auth": { 00:20:41.824 "state": "completed", 00:20:41.824 "digest": "sha384", 00:20:41.824 "dhgroup": "ffdhe3072" 00:20:41.824 } 00:20:41.824 } 00:20:41.824 ]' 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.824 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.083 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.083 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.083 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.083 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.083 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:42.341 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:42.341 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.909 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.168 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.168 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
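Note the ckey expansion at target/auth.sh@68 above: key3 has no matching controller key in this run, so the array expands to nothing and the attach that follows carries only --dhchap-key key3, making that leg host-authenticated only. A small restatement of the pattern, with keyid and ckeys standing in for the script's own loop variables (the script uses the positional parameter $3 inside connect_authenticate):

    # Expands empty when ckeys[$keyid] is unset, so the --dhchap-ctrlr-key
    # flag is simply omitted from the attach for unidirectional auth.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
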
00:20:43.168 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.168 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.168 00:20:43.426 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.426 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.426 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.426 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.426 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.427 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.427 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.427 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.427 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.427 { 00:20:43.427 "cntlid": 71, 00:20:43.427 "qid": 0, 00:20:43.427 "state": "enabled", 00:20:43.427 "thread": "nvmf_tgt_poll_group_000", 00:20:43.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.427 "listen_address": { 00:20:43.427 "trtype": "TCP", 00:20:43.427 "adrfam": "IPv4", 00:20:43.427 "traddr": "10.0.0.2", 00:20:43.427 "trsvcid": "4420" 00:20:43.427 }, 00:20:43.427 "peer_address": { 00:20:43.427 "trtype": "TCP", 00:20:43.427 "adrfam": "IPv4", 00:20:43.427 "traddr": "10.0.0.1", 00:20:43.427 "trsvcid": "50962" 00:20:43.427 }, 00:20:43.427 "auth": { 00:20:43.427 "state": "completed", 00:20:43.427 "digest": "sha384", 00:20:43.427 "dhgroup": "ffdhe3072" 00:20:43.427 } 00:20:43.427 } 00:20:43.427 ]' 00:20:43.427 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.685 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.942 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:43.942 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.507 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.507 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.508 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
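After each attach, the suite pivots to the target side and validates the DH-HMAC-CHAP parameters actually negotiated on the live qpair. A sketch of that check for the ffdhe4096/key0 iteration, reusing the suite's rpc_cmd helper and the jq filters shown in the trace (the herestring plumbing is an assumption for illustration, not copied from auth.sh):

# Ask the target for the subsystem's active qpairs and verify the negotiated auth state.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # JSON array, one entry per qpair

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # hash negotiated as configured
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # DH group for this iteration
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished successfully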
00:20:44.508 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.508 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.508 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.766 00:20:44.766 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.766 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.766 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.024 { 00:20:45.024 "cntlid": 73, 00:20:45.024 "qid": 0, 00:20:45.024 "state": "enabled", 00:20:45.024 "thread": "nvmf_tgt_poll_group_000", 00:20:45.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.024 "listen_address": { 00:20:45.024 "trtype": "TCP", 00:20:45.024 "adrfam": "IPv4", 00:20:45.024 "traddr": "10.0.0.2", 00:20:45.024 "trsvcid": "4420" 00:20:45.024 }, 00:20:45.024 "peer_address": { 00:20:45.024 "trtype": "TCP", 00:20:45.024 "adrfam": "IPv4", 00:20:45.024 "traddr": "10.0.0.1", 00:20:45.024 "trsvcid": "50978" 00:20:45.024 }, 00:20:45.024 "auth": { 00:20:45.024 "state": "completed", 00:20:45.024 "digest": "sha384", 00:20:45.024 "dhgroup": "ffdhe4096" 00:20:45.024 } 00:20:45.024 } 00:20:45.024 ]' 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.024 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.283 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.283 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.283 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.283 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.283 
16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.283 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.542 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:45.542 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.109 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.368 00:20:46.627 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.627 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.627 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.627 { 00:20:46.627 "cntlid": 75, 00:20:46.627 "qid": 0, 00:20:46.627 "state": "enabled", 00:20:46.627 "thread": "nvmf_tgt_poll_group_000", 00:20:46.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.627 "listen_address": { 00:20:46.627 "trtype": "TCP", 00:20:46.627 "adrfam": "IPv4", 00:20:46.627 "traddr": "10.0.0.2", 00:20:46.627 "trsvcid": "4420" 00:20:46.627 }, 00:20:46.627 "peer_address": { 00:20:46.627 "trtype": "TCP", 00:20:46.627 "adrfam": "IPv4", 00:20:46.627 "traddr": "10.0.0.1", 00:20:46.627 "trsvcid": "51020" 00:20:46.627 }, 00:20:46.627 "auth": { 00:20:46.627 "state": "completed", 00:20:46.627 "digest": "sha384", 00:20:46.627 "dhgroup": "ffdhe4096" 00:20:46.627 } 00:20:46.627 } 00:20:46.627 ]' 00:20:46.627 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.886 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.145 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:47.145 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:47.712 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.712 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.712 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.712 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.712 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.712 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.713 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.713 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.972 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.231 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.231 { 00:20:48.231 "cntlid": 77, 00:20:48.231 "qid": 0, 00:20:48.231 "state": "enabled", 00:20:48.231 "thread": "nvmf_tgt_poll_group_000", 00:20:48.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.231 "listen_address": { 00:20:48.231 "trtype": "TCP", 00:20:48.231 "adrfam": "IPv4", 00:20:48.231 "traddr": "10.0.0.2", 00:20:48.231 "trsvcid": "4420" 00:20:48.231 }, 00:20:48.231 "peer_address": { 00:20:48.231 "trtype": "TCP", 00:20:48.231 "adrfam": "IPv4", 00:20:48.231 "traddr": "10.0.0.1", 00:20:48.231 "trsvcid": "51054" 00:20:48.231 }, 00:20:48.231 "auth": { 00:20:48.231 "state": "completed", 00:20:48.231 "digest": "sha384", 00:20:48.231 "dhgroup": "ffdhe4096" 00:20:48.231 } 00:20:48.231 } 00:20:48.231 ]' 00:20:48.231 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.489 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.489 16:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.489 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.489 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.489 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.489 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.489 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.748 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:48.748 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.316 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.575 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.833 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.834 { 00:20:49.834 "cntlid": 79, 00:20:49.834 "qid": 0, 00:20:49.834 "state": "enabled", 00:20:49.834 "thread": "nvmf_tgt_poll_group_000", 00:20:49.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.834 "listen_address": { 00:20:49.834 "trtype": "TCP", 00:20:49.834 "adrfam": "IPv4", 00:20:49.834 "traddr": "10.0.0.2", 00:20:49.834 "trsvcid": "4420" 00:20:49.834 }, 00:20:49.834 "peer_address": { 00:20:49.834 "trtype": "TCP", 00:20:49.834 "adrfam": "IPv4", 00:20:49.834 "traddr": "10.0.0.1", 00:20:49.834 "trsvcid": "51094" 00:20:49.834 }, 00:20:49.834 "auth": { 00:20:49.834 "state": "completed", 00:20:49.834 "digest": "sha384", 00:20:49.834 "dhgroup": "ffdhe4096" 00:20:49.834 } 00:20:49.834 } 00:20:49.834 ]' 00:20:49.834 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.093 16:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.093 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.093 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.093 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.093 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.093 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.093 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.351 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:50.352 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:50.919 16:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.919 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.487 00:20:51.487 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.487 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.487 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.487 { 00:20:51.487 "cntlid": 81, 00:20:51.487 "qid": 0, 00:20:51.487 "state": "enabled", 00:20:51.487 "thread": "nvmf_tgt_poll_group_000", 00:20:51.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.487 "listen_address": { 00:20:51.487 "trtype": "TCP", 00:20:51.487 "adrfam": "IPv4", 00:20:51.487 "traddr": "10.0.0.2", 00:20:51.487 "trsvcid": "4420" 00:20:51.487 }, 00:20:51.487 "peer_address": { 00:20:51.487 "trtype": "TCP", 00:20:51.487 "adrfam": "IPv4", 00:20:51.487 "traddr": "10.0.0.1", 00:20:51.487 "trsvcid": "51120" 00:20:51.487 }, 00:20:51.487 "auth": { 00:20:51.487 "state": "completed", 00:20:51.487 "digest": 
"sha384", 00:20:51.487 "dhgroup": "ffdhe6144" 00:20:51.487 } 00:20:51.487 } 00:20:51.487 ]' 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.487 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.746 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.746 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.746 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.746 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.746 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.746 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.005 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:52.005 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.573 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.573 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.142 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.142 { 00:20:53.142 "cntlid": 83, 00:20:53.142 "qid": 0, 00:20:53.142 "state": "enabled", 00:20:53.142 "thread": "nvmf_tgt_poll_group_000", 00:20:53.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.142 "listen_address": { 00:20:53.142 "trtype": "TCP", 00:20:53.142 "adrfam": "IPv4", 00:20:53.142 "traddr": "10.0.0.2", 00:20:53.142 
"trsvcid": "4420" 00:20:53.142 }, 00:20:53.142 "peer_address": { 00:20:53.142 "trtype": "TCP", 00:20:53.142 "adrfam": "IPv4", 00:20:53.142 "traddr": "10.0.0.1", 00:20:53.142 "trsvcid": "33052" 00:20:53.142 }, 00:20:53.142 "auth": { 00:20:53.142 "state": "completed", 00:20:53.142 "digest": "sha384", 00:20:53.142 "dhgroup": "ffdhe6144" 00:20:53.142 } 00:20:53.142 } 00:20:53.142 ]' 00:20:53.142 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.400 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.659 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:53.659 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.227 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.227 
16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.228 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.796 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.796 { 00:20:54.796 "cntlid": 85, 00:20:54.796 "qid": 0, 00:20:54.796 "state": "enabled", 00:20:54.796 "thread": "nvmf_tgt_poll_group_000", 00:20:54.796 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.796 "listen_address": { 00:20:54.796 "trtype": "TCP", 00:20:54.796 "adrfam": "IPv4", 00:20:54.796 "traddr": "10.0.0.2", 00:20:54.796 "trsvcid": "4420" 00:20:54.796 }, 00:20:54.796 "peer_address": { 00:20:54.796 "trtype": "TCP", 00:20:54.796 "adrfam": "IPv4", 00:20:54.796 "traddr": "10.0.0.1", 00:20:54.796 "trsvcid": "33086" 00:20:54.796 }, 00:20:54.796 "auth": { 00:20:54.796 "state": "completed", 00:20:54.796 "digest": "sha384", 00:20:54.796 "dhgroup": "ffdhe6144" 00:20:54.796 } 00:20:54.796 } 00:20:54.796 ]' 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.796 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.055 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.055 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.055 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.055 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.055 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.313 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:55.313 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.880 16:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:55.880 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.881 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.447 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.447 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.447 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.447 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.447 { 00:20:56.447 "cntlid": 87, 
00:20:56.447 "qid": 0, 00:20:56.447 "state": "enabled", 00:20:56.447 "thread": "nvmf_tgt_poll_group_000", 00:20:56.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.447 "listen_address": { 00:20:56.447 "trtype": "TCP", 00:20:56.447 "adrfam": "IPv4", 00:20:56.447 "traddr": "10.0.0.2", 00:20:56.447 "trsvcid": "4420" 00:20:56.447 }, 00:20:56.447 "peer_address": { 00:20:56.447 "trtype": "TCP", 00:20:56.447 "adrfam": "IPv4", 00:20:56.447 "traddr": "10.0.0.1", 00:20:56.447 "trsvcid": "33130" 00:20:56.447 }, 00:20:56.447 "auth": { 00:20:56.447 "state": "completed", 00:20:56.447 "digest": "sha384", 00:20:56.447 "dhgroup": "ffdhe6144" 00:20:56.447 } 00:20:56.447 } 00:20:56.447 ]' 00:20:56.447 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.447 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.447 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.706 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:56.706 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.706 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.706 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.706 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.964 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:56.964 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.531 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.531 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.789 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.789 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.789 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.790 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.048 00:20:58.048 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.048 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.048 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.307 { 00:20:58.307 "cntlid": 89, 00:20:58.307 "qid": 0, 00:20:58.307 "state": "enabled", 00:20:58.307 "thread": "nvmf_tgt_poll_group_000", 00:20:58.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.307 "listen_address": { 00:20:58.307 "trtype": "TCP", 00:20:58.307 "adrfam": "IPv4", 00:20:58.307 "traddr": "10.0.0.2", 00:20:58.307 "trsvcid": "4420" 00:20:58.307 }, 00:20:58.307 "peer_address": { 00:20:58.307 "trtype": "TCP", 00:20:58.307 "adrfam": "IPv4", 00:20:58.307 "traddr": "10.0.0.1", 00:20:58.307 "trsvcid": "33160" 00:20:58.307 }, 00:20:58.307 "auth": { 00:20:58.307 "state": "completed", 00:20:58.307 "digest": "sha384", 00:20:58.307 "dhgroup": "ffdhe8192" 00:20:58.307 } 00:20:58.307 } 00:20:58.307 ]' 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.307 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.566 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.566 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.566 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.566 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:58.566 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:20:59.133 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.133 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.134 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.134 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.134 16:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.134 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.134 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.134 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.392 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.960 00:20:59.960 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.960 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.960 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.219 { 00:21:00.219 "cntlid": 91, 00:21:00.219 "qid": 0, 00:21:00.219 "state": "enabled", 00:21:00.219 "thread": "nvmf_tgt_poll_group_000", 00:21:00.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.219 "listen_address": { 00:21:00.219 "trtype": "TCP", 00:21:00.219 "adrfam": "IPv4", 00:21:00.219 "traddr": "10.0.0.2", 00:21:00.219 "trsvcid": "4420" 00:21:00.219 }, 00:21:00.219 "peer_address": { 00:21:00.219 "trtype": "TCP", 00:21:00.219 "adrfam": "IPv4", 00:21:00.219 "traddr": "10.0.0.1", 00:21:00.219 "trsvcid": "33170" 00:21:00.219 }, 00:21:00.219 "auth": { 00:21:00.219 "state": "completed", 00:21:00.219 "digest": "sha384", 00:21:00.219 "dhgroup": "ffdhe8192" 00:21:00.219 } 00:21:00.219 } 00:21:00.219 ]' 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.219 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.478 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:00.478 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.045 16:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.045 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.303 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.304 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.304 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.304 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.871 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.871 16:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.871 { 00:21:01.871 "cntlid": 93, 00:21:01.871 "qid": 0, 00:21:01.871 "state": "enabled", 00:21:01.871 "thread": "nvmf_tgt_poll_group_000", 00:21:01.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.871 "listen_address": { 00:21:01.871 "trtype": "TCP", 00:21:01.871 "adrfam": "IPv4", 00:21:01.871 "traddr": "10.0.0.2", 00:21:01.871 "trsvcid": "4420" 00:21:01.871 }, 00:21:01.871 "peer_address": { 00:21:01.871 "trtype": "TCP", 00:21:01.871 "adrfam": "IPv4", 00:21:01.871 "traddr": "10.0.0.1", 00:21:01.871 "trsvcid": "33200" 00:21:01.871 }, 00:21:01.871 "auth": { 00:21:01.871 "state": "completed", 00:21:01.871 "digest": "sha384", 00:21:01.871 "dhgroup": "ffdhe8192" 00:21:01.871 } 00:21:01.871 } 00:21:01.871 ]' 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.871 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:02.130 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:02.697 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.697 16:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.697 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.697 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.955 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.522 00:21:03.522 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.522 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.522 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.781 { 00:21:03.781 "cntlid": 95, 00:21:03.781 "qid": 0, 00:21:03.781 "state": "enabled", 00:21:03.781 "thread": "nvmf_tgt_poll_group_000", 00:21:03.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.781 "listen_address": { 00:21:03.781 "trtype": "TCP", 00:21:03.781 "adrfam": "IPv4", 00:21:03.781 "traddr": "10.0.0.2", 00:21:03.781 "trsvcid": "4420" 00:21:03.781 }, 00:21:03.781 "peer_address": { 00:21:03.781 "trtype": "TCP", 00:21:03.781 "adrfam": "IPv4", 00:21:03.781 "traddr": "10.0.0.1", 00:21:03.781 "trsvcid": "47508" 00:21:03.781 }, 00:21:03.781 "auth": { 00:21:03.781 "state": "completed", 00:21:03.781 "digest": "sha384", 00:21:03.781 "dhgroup": "ffdhe8192" 00:21:03.781 } 00:21:03.781 } 00:21:03.781 ]' 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.781 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.040 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:04.040 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.607 16:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.607 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.865 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.167 00:21:05.167 
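The trace repeats one pattern per digest/dhgroup/key combination: constrain the host-side initiator, grant the host on the target, attach, verify, then tear down. A minimal sketch of the setup half of one iteration, assuming the rpc.py path, sockets, address, and NQNs printed in this log (the shell variable names are illustrative, not from the script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock    # RPC socket of the host-side bdev app, as used by hostrpc above
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # 1. Restrict the initiator to the digest/dhgroup pair under test.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # 2. On the target (rpc.py's default socket, as rpc_cmd uses here), allow the host with
    #    the keys under test; the controller key is passed only for key indices that define one.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. Attach a controller from the host side, authenticating with the same keys.
    $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0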
16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.167 { 00:21:05.167 "cntlid": 97, 00:21:05.167 "qid": 0, 00:21:05.167 "state": "enabled", 00:21:05.167 "thread": "nvmf_tgt_poll_group_000", 00:21:05.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.167 "listen_address": { 00:21:05.167 "trtype": "TCP", 00:21:05.167 "adrfam": "IPv4", 00:21:05.167 "traddr": "10.0.0.2", 00:21:05.167 "trsvcid": "4420" 00:21:05.167 }, 00:21:05.167 "peer_address": { 00:21:05.167 "trtype": "TCP", 00:21:05.167 "adrfam": "IPv4", 00:21:05.167 "traddr": "10.0.0.1", 00:21:05.167 "trsvcid": "47542" 00:21:05.167 }, 00:21:05.167 "auth": { 00:21:05.167 "state": "completed", 00:21:05.167 "digest": "sha512", 00:21:05.167 "dhgroup": "null" 00:21:05.167 } 00:21:05.167 } 00:21:05.167 ]' 00:21:05.167 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.500 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.500 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:05.500 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.068 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.327 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.586 00:21:06.586 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.586 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.586 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.844 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.844 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.844 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.844 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.844 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.844 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.844 { 00:21:06.844 "cntlid": 99, 00:21:06.844 "qid": 0, 00:21:06.844 "state": "enabled", 00:21:06.844 "thread": "nvmf_tgt_poll_group_000", 00:21:06.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.844 "listen_address": { 00:21:06.844 "trtype": "TCP", 00:21:06.844 "adrfam": "IPv4", 00:21:06.844 "traddr": "10.0.0.2", 00:21:06.844 "trsvcid": "4420" 00:21:06.844 }, 00:21:06.844 "peer_address": { 00:21:06.844 "trtype": "TCP", 00:21:06.845 "adrfam": "IPv4", 00:21:06.845 "traddr": "10.0.0.1", 00:21:06.845 "trsvcid": "47568" 00:21:06.845 }, 00:21:06.845 "auth": { 00:21:06.845 "state": "completed", 00:21:06.845 "digest": "sha512", 00:21:06.845 "dhgroup": "null" 00:21:06.845 } 00:21:06.845 } 00:21:06.845 ]' 00:21:06.845 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.845 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.845 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.845 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.845 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.103 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.103 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.103 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.103 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:07.103 16:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.671 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
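After each attach, the script verifies the negotiated parameters on the target side, which is what the qpair dumps and jq checks in this trace show. A sketch of that verification step, assuming the same NQNs and jq filters that appear verbatim above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Confirm the host-side controller actually came up.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Dump the subsystem's qpairs on the target and check the negotiated auth block.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(echo "$qpairs" | jq -r '.[0].auth.digest') == sha512 ]]
    [[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == null ]]
    [[ $(echo "$qpairs" | jq -r '.[0].auth.state') == completed ]]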
00:21:07.930 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.189 00:21:08.189 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.189 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.189 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.447 { 00:21:08.447 "cntlid": 101, 00:21:08.447 "qid": 0, 00:21:08.447 "state": "enabled", 00:21:08.447 "thread": "nvmf_tgt_poll_group_000", 00:21:08.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.447 "listen_address": { 00:21:08.447 "trtype": "TCP", 00:21:08.447 "adrfam": "IPv4", 00:21:08.447 "traddr": "10.0.0.2", 00:21:08.447 "trsvcid": "4420" 00:21:08.447 }, 00:21:08.447 "peer_address": { 00:21:08.447 "trtype": "TCP", 00:21:08.447 "adrfam": "IPv4", 00:21:08.447 "traddr": "10.0.0.1", 00:21:08.447 "trsvcid": "47598" 00:21:08.447 }, 00:21:08.447 "auth": { 00:21:08.447 "state": "completed", 00:21:08.447 "digest": "sha512", 00:21:08.447 "dhgroup": "null" 00:21:08.447 } 00:21:08.447 } 00:21:08.447 ]' 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.447 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.447 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.447 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.447 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.706 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:08.706 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.273 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.532 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.791 00:21:09.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.050 { 00:21:10.050 "cntlid": 103, 00:21:10.050 "qid": 0, 00:21:10.050 "state": "enabled", 00:21:10.050 "thread": "nvmf_tgt_poll_group_000", 00:21:10.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.050 "listen_address": { 00:21:10.050 "trtype": "TCP", 00:21:10.050 "adrfam": "IPv4", 00:21:10.050 "traddr": "10.0.0.2", 00:21:10.050 "trsvcid": "4420" 00:21:10.050 }, 00:21:10.050 "peer_address": { 00:21:10.050 "trtype": "TCP", 00:21:10.050 "adrfam": "IPv4", 00:21:10.050 "traddr": "10.0.0.1", 00:21:10.050 "trsvcid": "47618" 00:21:10.050 }, 00:21:10.050 "auth": { 00:21:10.050 "state": "completed", 00:21:10.050 "digest": "sha512", 00:21:10.050 "dhgroup": "null" 00:21:10.050 } 00:21:10.050 } 00:21:10.050 ]' 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.050 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.308 16:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:10.308 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.876 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
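Each iteration then ends with a second DH-HMAC-CHAP handshake from the kernel initiator followed by teardown, as the detach/connect/disconnect/remove_host records throughout this trace show. A sketch of that closing half, with the DHHC-1 secrets elided here (the full strings are printed in the log), and the --dhchap-ctrl-secret flag present only when the key index defines a controller key:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # Drop the SPDK host-side controller before switching to the kernel initiator.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Repeat the handshake from nvme-cli with the same key material.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Revoke the host's access on the target before the next digest/dhgroup/key combination.
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"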
00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.135 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.393
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:11.393 {
00:21:11.393 "cntlid": 105,
00:21:11.393 "qid": 0,
00:21:11.393 "state": "enabled",
00:21:11.393 "thread": "nvmf_tgt_poll_group_000",
00:21:11.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:11.393 "listen_address": {
00:21:11.393 "trtype": "TCP",
00:21:11.393 "adrfam": "IPv4",
00:21:11.393 "traddr": "10.0.0.2",
00:21:11.393 "trsvcid": "4420"
00:21:11.393 },
00:21:11.393 "peer_address": {
00:21:11.393 "trtype": "TCP",
00:21:11.393 "adrfam": "IPv4",
00:21:11.393 "traddr": "10.0.0.1",
00:21:11.393 "trsvcid": "47638"
00:21:11.393 },
00:21:11.393 "auth": {
00:21:11.393 "state": "completed",
00:21:11.393 "digest": "sha512",
00:21:11.393 "dhgroup": "ffdhe2048"
00:21:11.393 }
00:21:11.393 }
00:21:11.393 ]'
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:11.393 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:11.652 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:11.652 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:11.652 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:11.652 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:11.652 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:11.911 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=:
00:21:11.911 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=:
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:12.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:12.479 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.479 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.737 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.737 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.737 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.737 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.737
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:12.996 {
00:21:12.996 "cntlid": 107,
00:21:12.996 "qid": 0,
00:21:12.996 "state": "enabled",
00:21:12.996 "thread": "nvmf_tgt_poll_group_000",
00:21:12.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:12.996 "listen_address": {
00:21:12.996 "trtype": "TCP",
00:21:12.996 "adrfam": "IPv4",
00:21:12.996 "traddr": "10.0.0.2",
00:21:12.996 "trsvcid": "4420"
00:21:12.996 },
00:21:12.996 "peer_address": {
00:21:12.996 "trtype": "TCP",
00:21:12.996 "adrfam": "IPv4",
00:21:12.996 "traddr": "10.0.0.1",
00:21:12.996 "trsvcid": "50312"
00:21:12.996 },
00:21:12.996 "auth": {
00:21:12.996 "state": "completed",
00:21:12.996 "digest": "sha512",
00:21:12.996 "dhgroup": "ffdhe2048"
00:21:12.996 }
00:21:12.996 }
00:21:12.996 ]'
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:12.996 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:13.255 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:13.255 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:13.255 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:13.255 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:13.255 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:13.514 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==:
00:21:13.514 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==:
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:14.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
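Every hostrpc entry expands, per the target/auth.sh@31 trace, into the same rpc.py invocation against the host-side SPDK application's RPC socket, which is distinct from the target's default socket. A minimal sketch of the wrapper and of the per-iteration reconfiguration it is used for here ($rootdir standing in for the absolute workspace path shown in the trace):

    hostrpc() {
        # talk to the host (initiator-side) SPDK app, not the nvmf target
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
    # restrict the host to a single digest/DH-group pair before each handshake,
    # so a completed qpair proves that exact combination was negotiated
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048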
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.082 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.341
00:21:14.341 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:14.341 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:14.341 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:14.599 {
00:21:14.599 "cntlid": 109,
00:21:14.599 "qid": 0,
00:21:14.599 "state": "enabled",
00:21:14.599 "thread": "nvmf_tgt_poll_group_000",
00:21:14.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:14.599 "listen_address": {
00:21:14.599 "trtype": "TCP",
00:21:14.599 "adrfam": "IPv4",
00:21:14.599 "traddr": "10.0.0.2",
00:21:14.599 "trsvcid": "4420"
00:21:14.599 },
00:21:14.599 "peer_address": {
00:21:14.599 "trtype": "TCP",
00:21:14.599 "adrfam": "IPv4",
00:21:14.599 "traddr": "10.0.0.1",
00:21:14.599 "trsvcid": "50336"
00:21:14.599 },
00:21:14.599 "auth": {
00:21:14.599 "state": "completed",
00:21:14.599 "digest": "sha512",
00:21:14.599 "dhgroup": "ffdhe2048"
00:21:14.599 }
00:21:14.599 }
00:21:14.599 ]'
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:14.599 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:14.858 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:14.858 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:14.858 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:14.858 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:14.858 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:15.116 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw:
00:21:15.116 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw:
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:15.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:15.683 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:15.684 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:15.684 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.684 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.944 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.944 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:15.944 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:15.944 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:15.944
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:16.203 {
00:21:16.203 "cntlid": 111,
00:21:16.203 "qid": 0,
00:21:16.203 "state": "enabled",
00:21:16.203 "thread": "nvmf_tgt_poll_group_000",
00:21:16.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:16.203 "listen_address": {
00:21:16.203 "trtype": "TCP",
00:21:16.203 "adrfam": "IPv4",
00:21:16.203 "traddr": "10.0.0.2",
00:21:16.203 "trsvcid": "4420"
00:21:16.203 },
00:21:16.203 "peer_address": {
00:21:16.203 "trtype": "TCP",
00:21:16.203 "adrfam": "IPv4",
00:21:16.203 "traddr": "10.0.0.1",
00:21:16.203 "trsvcid": "50366"
00:21:16.203 },
00:21:16.203 "auth": {
00:21:16.203 "state": "completed",
00:21:16.203 "digest": "sha512",
00:21:16.203 "dhgroup": "ffdhe2048"
00:21:16.203 }
00:21:16.203 }
00:21:16.203 ]'
00:21:16.203 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:16.462 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.720 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=:
00:21:16.720 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=:
00:21:17.286 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:17.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:17.286 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:17.286 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.286 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.286 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.287 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.545 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.545 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.545 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.545 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.545
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:17.804 {
00:21:17.804 "cntlid": 113,
00:21:17.804 "qid": 0,
00:21:17.804 "state": "enabled",
00:21:17.804 "thread": "nvmf_tgt_poll_group_000",
00:21:17.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:17.804 "listen_address": {
00:21:17.804 "trtype": "TCP",
00:21:17.804 "adrfam": "IPv4",
00:21:17.804 "traddr": "10.0.0.2",
00:21:17.804 "trsvcid": "4420"
00:21:17.804 },
00:21:17.804 "peer_address": {
00:21:17.804 "trtype": "TCP",
00:21:17.804 "adrfam": "IPv4",
00:21:17.804 "traddr": "10.0.0.1",
00:21:17.804 "trsvcid": "50388"
00:21:17.804 },
00:21:17.804 "auth": {
00:21:17.804 "state": "completed",
00:21:17.804 "digest": "sha512",
00:21:17.804 "dhgroup": "ffdhe3072"
00:21:17.804 }
00:21:17.804 }
00:21:17.804 ]'
00:21:17.804 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:18.063 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:18.322 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=:
00:21:18.322 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=:
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:18.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:18.890 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:19.148 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:21:19.148 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:19.148 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:19.148 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.149 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.407
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.407 {
00:21:19.407 "cntlid": 115,
00:21:19.407 "qid": 0,
00:21:19.407 "state": "enabled",
00:21:19.407 "thread": "nvmf_tgt_poll_group_000",
00:21:19.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:19.407 "listen_address": {
00:21:19.407 "trtype": "TCP",
00:21:19.407 "adrfam": "IPv4",
00:21:19.407 "traddr": "10.0.0.2",
00:21:19.407 "trsvcid": "4420"
00:21:19.407 },
00:21:19.407 "peer_address": {
00:21:19.407 "trtype": "TCP",
00:21:19.407 "adrfam": "IPv4",
00:21:19.407 "traddr": "10.0.0.1",
00:21:19.407 "trsvcid": "50410"
00:21:19.407 },
00:21:19.407 "auth": {
00:21:19.407 "state": "completed",
00:21:19.407 "digest": "sha512",
00:21:19.407 "dhgroup": "ffdhe3072"
00:21:19.407 }
00:21:19.407 }
00:21:19.407 ]'
00:21:19.407 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:19.665 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:19.923 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==:
00:21:19.923 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==:
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:20.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:20.491 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.491 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.750 00:21:20.750 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.750 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.750 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.009 { 00:21:21.009 "cntlid": 117, 00:21:21.009 "qid": 0, 00:21:21.009 "state": "enabled", 00:21:21.009 "thread": "nvmf_tgt_poll_group_000", 00:21:21.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.009 "listen_address": { 00:21:21.009 "trtype": "TCP", 
00:21:21.009 "adrfam": "IPv4", 00:21:21.009 "traddr": "10.0.0.2", 00:21:21.009 "trsvcid": "4420" 00:21:21.009 }, 00:21:21.009 "peer_address": { 00:21:21.009 "trtype": "TCP", 00:21:21.009 "adrfam": "IPv4", 00:21:21.009 "traddr": "10.0.0.1", 00:21:21.009 "trsvcid": "50440" 00:21:21.009 }, 00:21:21.009 "auth": { 00:21:21.009 "state": "completed", 00:21:21.009 "digest": "sha512", 00:21:21.009 "dhgroup": "ffdhe3072" 00:21:21.009 } 00:21:21.009 } 00:21:21.009 ]' 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.009 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.268 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.268 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.268 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.268 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.268 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.526 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:21.526 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.094 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.353 00:21:22.353 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.353 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.353 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.612 { 00:21:22.612 "cntlid": 119, 00:21:22.612 "qid": 0, 00:21:22.612 "state": "enabled", 00:21:22.612 "thread": "nvmf_tgt_poll_group_000", 00:21:22.612 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.612 "listen_address": { 00:21:22.612 "trtype": "TCP", 00:21:22.612 "adrfam": "IPv4", 00:21:22.612 "traddr": "10.0.0.2", 00:21:22.612 "trsvcid": "4420" 00:21:22.612 }, 00:21:22.612 "peer_address": { 00:21:22.612 "trtype": "TCP", 00:21:22.612 "adrfam": "IPv4", 00:21:22.612 "traddr": "10.0.0.1", 00:21:22.612 "trsvcid": "51354" 00:21:22.612 }, 00:21:22.612 "auth": { 00:21:22.612 "state": "completed", 00:21:22.612 "digest": "sha512", 00:21:22.612 "dhgroup": "ffdhe3072" 00:21:22.612 } 00:21:22.612 } 00:21:22.612 ]' 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.612 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.871 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.871 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.871 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.871 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:22.871 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.438 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.438 16:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.696 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:23.696 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.696 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.697 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.955 00:21:23.955 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.955 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.955 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.214 16:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.214 { 00:21:24.214 "cntlid": 121, 00:21:24.214 "qid": 0, 00:21:24.214 "state": "enabled", 00:21:24.214 "thread": "nvmf_tgt_poll_group_000", 00:21:24.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.214 "listen_address": { 00:21:24.214 "trtype": "TCP", 00:21:24.214 "adrfam": "IPv4", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "trsvcid": "4420" 00:21:24.214 }, 00:21:24.214 "peer_address": { 00:21:24.214 "trtype": "TCP", 00:21:24.214 "adrfam": "IPv4", 00:21:24.214 "traddr": "10.0.0.1", 00:21:24.214 "trsvcid": "51386" 00:21:24.214 }, 00:21:24.214 "auth": { 00:21:24.214 "state": "completed", 00:21:24.214 "digest": "sha512", 00:21:24.214 "dhgroup": "ffdhe4096" 00:21:24.214 } 00:21:24.214 } 00:21:24.214 ]' 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.214 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.472 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.472 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.472 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.472 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:24.472 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
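The records above complete one full pass of the test's inner loop: one digest/dhgroup combination exercised against one key id. A minimal sketch of that cycle, assuming the target RPC server on its default socket, the host RPC server on /var/tmp/host.sock, and the NQNs shown in the trace; the variable names are illustrative, not part of target/auth.sh:

#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP round trip the trace repeats.
# Assumptions: rpc.py path, socket, and NQNs taken from the log records above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
keyid=0 dhgroup=ffdhe4096

# Restrict the host-side initiator to a single digest/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

# Allow the host on the target, bound to this key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller through the host RPC server; this performs the
# bidirectional DH-HMAC-CHAP exchange over the TCP transport.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Confirm the negotiated parameters on the admin qpair, then tear down.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
    | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace's [[ ... == ... ]] checks correspond to the jq line here: a qpair whose auth block reports state "completed" with the expected digest and dhgroup is what counts as a pass before the loop moves on to the next key id.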
00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.038 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.297 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.556 00:21:25.556 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.556 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.556 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.815 { 00:21:25.815 "cntlid": 123, 00:21:25.815 "qid": 0, 00:21:25.815 "state": "enabled", 00:21:25.815 "thread": "nvmf_tgt_poll_group_000", 00:21:25.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.815 "listen_address": { 00:21:25.815 "trtype": "TCP", 00:21:25.815 "adrfam": "IPv4", 00:21:25.815 "traddr": "10.0.0.2", 00:21:25.815 "trsvcid": "4420" 00:21:25.815 }, 00:21:25.815 "peer_address": { 00:21:25.815 "trtype": "TCP", 00:21:25.815 "adrfam": "IPv4", 00:21:25.815 "traddr": "10.0.0.1", 00:21:25.815 "trsvcid": "51402" 00:21:25.815 }, 00:21:25.815 "auth": { 00:21:25.815 "state": "completed", 00:21:25.815 "digest": "sha512", 00:21:25.815 "dhgroup": "ffdhe4096" 00:21:25.815 } 00:21:25.815 } 00:21:25.815 ]' 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.815 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.074 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:26.074 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:26.640 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.640 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.640 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.640 16:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.640 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.641 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.641 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.641 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.899 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.157 00:21:27.157 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.157 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.157 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.416 16:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.416 { 00:21:27.416 "cntlid": 125, 00:21:27.416 "qid": 0, 00:21:27.416 "state": "enabled", 00:21:27.416 "thread": "nvmf_tgt_poll_group_000", 00:21:27.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.416 "listen_address": { 00:21:27.416 "trtype": "TCP", 00:21:27.416 "adrfam": "IPv4", 00:21:27.416 "traddr": "10.0.0.2", 00:21:27.416 "trsvcid": "4420" 00:21:27.416 }, 00:21:27.416 "peer_address": { 00:21:27.416 "trtype": "TCP", 00:21:27.416 "adrfam": "IPv4", 00:21:27.416 "traddr": "10.0.0.1", 00:21:27.416 "trsvcid": "51428" 00:21:27.416 }, 00:21:27.416 "auth": { 00:21:27.416 "state": "completed", 00:21:27.416 "digest": "sha512", 00:21:27.416 "dhgroup": "ffdhe4096" 00:21:27.416 } 00:21:27.416 } 00:21:27.416 ]' 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.416 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.416 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.416 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.416 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.675 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:27.675 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.242 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.501 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.760 00:21:28.760 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.760 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.760 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.019 16:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.019 { 00:21:29.019 "cntlid": 127, 00:21:29.019 "qid": 0, 00:21:29.019 "state": "enabled", 00:21:29.019 "thread": "nvmf_tgt_poll_group_000", 00:21:29.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.019 "listen_address": { 00:21:29.019 "trtype": "TCP", 00:21:29.019 "adrfam": "IPv4", 00:21:29.019 "traddr": "10.0.0.2", 00:21:29.019 "trsvcid": "4420" 00:21:29.019 }, 00:21:29.019 "peer_address": { 00:21:29.019 "trtype": "TCP", 00:21:29.019 "adrfam": "IPv4", 00:21:29.019 "traddr": "10.0.0.1", 00:21:29.019 "trsvcid": "51448" 00:21:29.019 }, 00:21:29.019 "auth": { 00:21:29.019 "state": "completed", 00:21:29.019 "digest": "sha512", 00:21:29.019 "dhgroup": "ffdhe4096" 00:21:29.019 } 00:21:29.019 } 00:21:29.019 ]' 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.019 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.278 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.278 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.278 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.278 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:29.278 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.845 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.104 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.363 00:21:30.623 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.623 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.623 
16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.623 { 00:21:30.623 "cntlid": 129, 00:21:30.623 "qid": 0, 00:21:30.623 "state": "enabled", 00:21:30.623 "thread": "nvmf_tgt_poll_group_000", 00:21:30.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.623 "listen_address": { 00:21:30.623 "trtype": "TCP", 00:21:30.623 "adrfam": "IPv4", 00:21:30.623 "traddr": "10.0.0.2", 00:21:30.623 "trsvcid": "4420" 00:21:30.623 }, 00:21:30.623 "peer_address": { 00:21:30.623 "trtype": "TCP", 00:21:30.623 "adrfam": "IPv4", 00:21:30.623 "traddr": "10.0.0.1", 00:21:30.623 "trsvcid": "51470" 00:21:30.623 }, 00:21:30.623 "auth": { 00:21:30.623 "state": "completed", 00:21:30.623 "digest": "sha512", 00:21:30.623 "dhgroup": "ffdhe6144" 00:21:30.623 } 00:21:30.623 } 00:21:30.623 ]' 00:21:30.623 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.881 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.140 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:31.140 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret 
DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.707 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.274 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.275 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.533 { 00:21:32.533 "cntlid": 131, 00:21:32.533 "qid": 0, 00:21:32.533 "state": "enabled", 00:21:32.533 "thread": "nvmf_tgt_poll_group_000", 00:21:32.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.533 "listen_address": { 00:21:32.533 "trtype": "TCP", 00:21:32.533 "adrfam": "IPv4", 00:21:32.533 "traddr": "10.0.0.2", 00:21:32.533 "trsvcid": "4420" 00:21:32.533 }, 00:21:32.533 "peer_address": { 00:21:32.533 "trtype": "TCP", 00:21:32.533 "adrfam": "IPv4", 00:21:32.533 "traddr": "10.0.0.1", 00:21:32.533 "trsvcid": "55864" 00:21:32.533 }, 00:21:32.533 "auth": { 00:21:32.533 "state": "completed", 00:21:32.533 "digest": "sha512", 00:21:32.533 "dhgroup": "ffdhe6144" 00:21:32.533 } 00:21:32.533 } 00:21:32.533 ]' 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.533 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.533 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.533 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.792 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:32.792 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.360 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.618 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.618 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.618 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.618 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.877 00:21:33.877 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.877 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.877 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.136 { 00:21:34.136 "cntlid": 133, 00:21:34.136 "qid": 0, 00:21:34.136 "state": "enabled", 00:21:34.136 "thread": "nvmf_tgt_poll_group_000", 00:21:34.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.136 "listen_address": { 00:21:34.136 "trtype": "TCP", 00:21:34.136 "adrfam": "IPv4", 00:21:34.136 "traddr": "10.0.0.2", 00:21:34.136 "trsvcid": "4420" 00:21:34.136 }, 00:21:34.136 "peer_address": { 00:21:34.136 "trtype": "TCP", 00:21:34.136 "adrfam": "IPv4", 00:21:34.136 "traddr": "10.0.0.1", 00:21:34.136 "trsvcid": "55896" 00:21:34.136 }, 00:21:34.136 "auth": { 00:21:34.136 "state": "completed", 00:21:34.136 "digest": "sha512", 00:21:34.136 "dhgroup": "ffdhe6144" 00:21:34.136 } 00:21:34.136 } 00:21:34.136 ]' 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.136 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.395 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret 
DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:34.395 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.962 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:35.221 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.480 00:21:35.480 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.480 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.480 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.738 { 00:21:35.738 "cntlid": 135, 00:21:35.738 "qid": 0, 00:21:35.738 "state": "enabled", 00:21:35.738 "thread": "nvmf_tgt_poll_group_000", 00:21:35.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.738 "listen_address": { 00:21:35.738 "trtype": "TCP", 00:21:35.738 "adrfam": "IPv4", 00:21:35.738 "traddr": "10.0.0.2", 00:21:35.738 "trsvcid": "4420" 00:21:35.738 }, 00:21:35.738 "peer_address": { 00:21:35.738 "trtype": "TCP", 00:21:35.738 "adrfam": "IPv4", 00:21:35.738 "traddr": "10.0.0.1", 00:21:35.738 "trsvcid": "55922" 00:21:35.738 }, 00:21:35.738 "auth": { 00:21:35.738 "state": "completed", 00:21:35.738 "digest": "sha512", 00:21:35.738 "dhgroup": "ffdhe6144" 00:21:35.738 } 00:21:35.738 } 00:21:35.738 ]' 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.738 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.997 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:35.997 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.564 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.822 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.823 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.823 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.390 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.390 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.648 { 00:21:37.648 "cntlid": 137, 00:21:37.648 "qid": 0, 00:21:37.648 "state": "enabled", 00:21:37.648 "thread": "nvmf_tgt_poll_group_000", 00:21:37.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.648 "listen_address": { 00:21:37.648 "trtype": "TCP", 00:21:37.648 "adrfam": "IPv4", 00:21:37.648 "traddr": "10.0.0.2", 00:21:37.648 "trsvcid": "4420" 00:21:37.648 }, 00:21:37.648 "peer_address": { 00:21:37.648 "trtype": "TCP", 00:21:37.648 "adrfam": "IPv4", 00:21:37.648 "traddr": "10.0.0.1", 00:21:37.648 "trsvcid": "55948" 00:21:37.648 }, 00:21:37.648 "auth": { 00:21:37.648 "state": "completed", 00:21:37.648 "digest": "sha512", 00:21:37.648 "dhgroup": "ffdhe8192" 00:21:37.648 } 00:21:37.648 } 00:21:37.648 ]' 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.648 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.907 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:37.907 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.474 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.475 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.733 16:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.733 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.734 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.992 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.251 { 00:21:39.251 "cntlid": 139, 00:21:39.251 "qid": 0, 00:21:39.251 "state": "enabled", 00:21:39.251 "thread": "nvmf_tgt_poll_group_000", 00:21:39.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.251 "listen_address": { 00:21:39.251 "trtype": "TCP", 00:21:39.251 "adrfam": "IPv4", 00:21:39.251 "traddr": "10.0.0.2", 00:21:39.251 "trsvcid": "4420" 00:21:39.251 }, 00:21:39.251 "peer_address": { 00:21:39.251 "trtype": "TCP", 00:21:39.251 "adrfam": "IPv4", 00:21:39.251 "traddr": "10.0.0.1", 00:21:39.251 "trsvcid": "55960" 00:21:39.251 }, 00:21:39.251 "auth": { 00:21:39.251 "state": "completed", 00:21:39.251 "digest": "sha512", 00:21:39.251 "dhgroup": "ffdhe8192" 00:21:39.251 } 00:21:39.251 } 00:21:39.251 ]' 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.251 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.510 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.510 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.510 16:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.510 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.510 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.769 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:39.769 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: --dhchap-ctrl-secret DHHC-1:02:NTVkZDVkMmIzZWNmMTdmZjAxOTcxYzRlZDlmYjRjYTcwNzJmYmMzOWJlOGJkODNjA01hCQ==: 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.336 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.337 16:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.337 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.904 00:21:40.904 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.904 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.904 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.162 { 00:21:41.162 "cntlid": 141, 00:21:41.162 "qid": 0, 00:21:41.162 "state": "enabled", 00:21:41.162 "thread": "nvmf_tgt_poll_group_000", 00:21:41.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.162 "listen_address": { 00:21:41.162 "trtype": "TCP", 00:21:41.162 "adrfam": "IPv4", 00:21:41.162 "traddr": "10.0.0.2", 00:21:41.162 "trsvcid": "4420" 00:21:41.162 }, 00:21:41.162 "peer_address": { 00:21:41.162 "trtype": "TCP", 00:21:41.162 "adrfam": "IPv4", 00:21:41.162 "traddr": "10.0.0.1", 00:21:41.162 "trsvcid": "55972" 00:21:41.162 }, 00:21:41.162 "auth": { 00:21:41.162 "state": "completed", 00:21:41.162 "digest": "sha512", 00:21:41.162 "dhgroup": "ffdhe8192" 00:21:41.162 } 00:21:41.162 } 00:21:41.162 ]' 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.162 16:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.162 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.422 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:41.422 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:01:M2U1ZThiNmIzNDc0NzRiMTI0ZDlkODNkOThiYzc4YWagEhCw: 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.989 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.311 16:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.311 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.930 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.930 { 00:21:42.930 "cntlid": 143, 00:21:42.930 "qid": 0, 00:21:42.930 "state": "enabled", 00:21:42.930 "thread": "nvmf_tgt_poll_group_000", 00:21:42.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.930 "listen_address": { 00:21:42.930 "trtype": "TCP", 00:21:42.930 "adrfam": "IPv4", 00:21:42.930 "traddr": "10.0.0.2", 00:21:42.930 "trsvcid": "4420" 00:21:42.930 }, 00:21:42.930 "peer_address": { 00:21:42.930 "trtype": "TCP", 00:21:42.930 "adrfam": "IPv4", 00:21:42.930 "traddr": "10.0.0.1", 00:21:42.930 "trsvcid": "51332" 00:21:42.930 }, 00:21:42.930 "auth": { 00:21:42.930 "state": "completed", 00:21:42.930 "digest": "sha512", 00:21:42.930 "dhgroup": "ffdhe8192" 00:21:42.930 } 00:21:42.930 } 00:21:42.930 ]' 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.930 
16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.930 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.189 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.189 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.189 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.189 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:43.189 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.757 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.016 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.016 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.583 00:21:44.583 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.583 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.583 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.842 { 00:21:44.842 "cntlid": 145, 00:21:44.842 "qid": 0, 00:21:44.842 "state": "enabled", 00:21:44.842 "thread": "nvmf_tgt_poll_group_000", 00:21:44.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.842 "listen_address": { 00:21:44.842 "trtype": "TCP", 00:21:44.842 "adrfam": "IPv4", 00:21:44.842 "traddr": "10.0.0.2", 00:21:44.842 "trsvcid": "4420" 00:21:44.842 }, 00:21:44.842 "peer_address": { 00:21:44.842 
"trtype": "TCP", 00:21:44.842 "adrfam": "IPv4", 00:21:44.842 "traddr": "10.0.0.1", 00:21:44.842 "trsvcid": "51366" 00:21:44.842 }, 00:21:44.842 "auth": { 00:21:44.842 "state": "completed", 00:21:44.842 "digest": "sha512", 00:21:44.842 "dhgroup": "ffdhe8192" 00:21:44.842 } 00:21:44.842 } 00:21:44.842 ]' 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.842 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.843 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.101 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:45.101 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YWUyMzI5YjZlM2E0NTg4YjAxYzAzNWFkNjZiZDAzMmRhODRiYTk5OTk5MjEzNDNiZwNAeQ==: --dhchap-ctrl-secret DHHC-1:03:MGI1NWI2YzVlNTZkMDBhZmFlYTA2Y2Y2ZjE1MGZlNTBiZjUyYzEzZjhkMDQzOTkyZDY1Yjk5YWQyZDM1OWMxZsuHzPU=: 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:45.669 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:46.237 request: 00:21:46.237 { 00:21:46.237 "name": "nvme0", 00:21:46.237 "trtype": "tcp", 00:21:46.237 "traddr": "10.0.0.2", 00:21:46.237 "adrfam": "ipv4", 00:21:46.237 "trsvcid": "4420", 00:21:46.237 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.237 "prchk_reftag": false, 00:21:46.237 "prchk_guard": false, 00:21:46.237 "hdgst": false, 00:21:46.237 "ddgst": false, 00:21:46.237 "dhchap_key": "key2", 00:21:46.237 "allow_unrecognized_csi": false, 00:21:46.237 "method": "bdev_nvme_attach_controller", 00:21:46.237 "req_id": 1 00:21:46.237 } 00:21:46.237 Got JSON-RPC error response 00:21:46.237 response: 00:21:46.237 { 00:21:46.237 "code": -5, 00:21:46.237 "message": "Input/output error" 00:21:46.237 } 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 16:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:46.237 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:46.496 request: 00:21:46.496 { 00:21:46.496 "name": "nvme0", 00:21:46.496 "trtype": "tcp", 00:21:46.496 "traddr": "10.0.0.2", 00:21:46.496 "adrfam": "ipv4", 00:21:46.496 "trsvcid": "4420", 00:21:46.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.496 "prchk_reftag": false, 00:21:46.496 "prchk_guard": false, 00:21:46.496 "hdgst": false, 00:21:46.496 "ddgst": false, 00:21:46.496 "dhchap_key": "key1", 00:21:46.496 "dhchap_ctrlr_key": "ckey2", 00:21:46.496 "allow_unrecognized_csi": false, 00:21:46.496 "method": "bdev_nvme_attach_controller", 00:21:46.496 "req_id": 1 00:21:46.496 } 00:21:46.496 Got JSON-RPC error response 00:21:46.496 response: 00:21:46.496 { 00:21:46.496 "code": -5, 00:21:46.496 "message": "Input/output error" 00:21:46.496 } 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.496 16:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.496 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.497 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.064 request: 00:21:47.064 { 00:21:47.064 "name": "nvme0", 00:21:47.064 "trtype": "tcp", 00:21:47.064 "traddr": "10.0.0.2", 00:21:47.064 "adrfam": "ipv4", 00:21:47.064 "trsvcid": "4420", 00:21:47.064 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.064 "prchk_reftag": false, 00:21:47.064 "prchk_guard": false, 00:21:47.064 "hdgst": false, 00:21:47.064 "ddgst": false, 00:21:47.064 "dhchap_key": "key1", 00:21:47.064 "dhchap_ctrlr_key": "ckey1", 00:21:47.064 "allow_unrecognized_csi": false, 00:21:47.064 "method": "bdev_nvme_attach_controller", 00:21:47.064 "req_id": 1 00:21:47.064 } 00:21:47.064 Got JSON-RPC error response 00:21:47.064 response: 00:21:47.064 { 00:21:47.064 "code": -5, 00:21:47.064 "message": "Input/output error" 00:21:47.064 } 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 981791 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 981791 ']' 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 981791 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981791 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981791' 00:21:47.064 killing process with pid 981791 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 981791 00:21:47.064 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 981791 00:21:47.322 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:47.322 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1003408 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1003408 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003408 ']' 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.323 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1003408 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003408 ']' 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
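For reference, the keyring-based DHCHAP setup that the trace below walks through reduces to a handful of target-side RPCs. This is a minimal sketch, assuming the restarted target answers on the default /var/tmp/spdk.sock socket (rpc_cmd in target/auth.sh wraps rpc.py); the key file paths and NQNs are the ones that appear in this log, and the harness's retry and cleanup logic is omitted.

# Sketch of the keyring-based DHCHAP setup traced below (assumptions noted above).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Register each DHCHAP secret, and its controller counterpart ckeyN where one exists,
# with the target's keyring (file paths copied from this log).
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.uUl
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C
$RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.x89
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F8c
$RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.gmp   # key3 has no ckey3
# Authorize the host NQN to authenticate against the subsystem with a given key.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
  --dhchap-key key3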
00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.582 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 null0 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uUl 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.x4C ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4C 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.x89 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.F8c ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F8c 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:47.841 16:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vwr 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.UAR ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UAR 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gmp 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
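The hostrpc attach above expands to the rpc.py invocation that follows in the trace. Condensed, the host-side attach-and-verify step for one key amounts to the sketch below; hostrpc in target/auth.sh wraps rpc.py against the host app's /var/tmp/host.sock socket (both paths taken from this log), and the jq checks mirror the auth.sh@73-77 assertions. The socket paths are assumptions carried over from this run, not fixed defaults.

# Sketch of one attach-and-verify cycle, condensed from the surrounding trace.
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
# Attach over TCP, authenticating with DH-HMAC-CHAP key3.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# The controller should now be visible on the host side...
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# ...and the target should report the qpair's auth as completed with the
# negotiated digest and DH group (target-side RPC on the default socket).
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expected: "completed"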
00:21:47.842 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.778 nvme0n1 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.778 { 00:21:48.778 "cntlid": 1, 00:21:48.778 "qid": 0, 00:21:48.778 "state": "enabled", 00:21:48.778 "thread": "nvmf_tgt_poll_group_000", 00:21:48.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.778 "listen_address": { 00:21:48.778 "trtype": "TCP", 00:21:48.778 "adrfam": "IPv4", 00:21:48.778 "traddr": "10.0.0.2", 00:21:48.778 "trsvcid": "4420" 00:21:48.778 }, 00:21:48.778 "peer_address": { 00:21:48.778 "trtype": "TCP", 00:21:48.778 "adrfam": "IPv4", 00:21:48.778 "traddr": "10.0.0.1", 00:21:48.778 "trsvcid": "51424" 00:21:48.778 }, 00:21:48.778 "auth": { 00:21:48.778 "state": "completed", 00:21:48.778 "digest": "sha512", 00:21:48.778 "dhgroup": "ffdhe8192" 00:21:48.778 } 00:21:48.778 } 00:21:48.778 ]' 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.778 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.037 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.037 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.037 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.037 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.037 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.295 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:49.296 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.863 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.122 request: 00:21:50.122 { 00:21:50.122 "name": "nvme0", 00:21:50.122 "trtype": "tcp", 00:21:50.122 "traddr": "10.0.0.2", 00:21:50.122 "adrfam": "ipv4", 00:21:50.122 "trsvcid": "4420", 00:21:50.122 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.122 "prchk_reftag": false, 00:21:50.122 "prchk_guard": false, 00:21:50.122 "hdgst": false, 00:21:50.122 "ddgst": false, 00:21:50.122 "dhchap_key": "key3", 00:21:50.122 "allow_unrecognized_csi": false, 00:21:50.122 "method": "bdev_nvme_attach_controller", 00:21:50.122 "req_id": 1 00:21:50.122 } 00:21:50.122 Got JSON-RPC error response 00:21:50.122 response: 00:21:50.122 { 00:21:50.122 "code": -5, 00:21:50.122 "message": "Input/output error" 00:21:50.122 } 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:50.122 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.380 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.639 request: 00:21:50.639 { 00:21:50.639 "name": "nvme0", 00:21:50.639 "trtype": "tcp", 00:21:50.639 "traddr": "10.0.0.2", 00:21:50.639 "adrfam": "ipv4", 00:21:50.639 "trsvcid": "4420", 00:21:50.639 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.639 "prchk_reftag": false, 00:21:50.639 "prchk_guard": false, 00:21:50.639 "hdgst": false, 00:21:50.639 "ddgst": false, 00:21:50.639 "dhchap_key": "key3", 00:21:50.639 "allow_unrecognized_csi": false, 00:21:50.639 "method": "bdev_nvme_attach_controller", 00:21:50.639 "req_id": 1 00:21:50.639 } 00:21:50.639 Got JSON-RPC error response 00:21:50.639 response: 00:21:50.639 { 00:21:50.639 "code": -5, 00:21:50.639 "message": "Input/output error" 00:21:50.639 } 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:50.639 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.640 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.898 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:51.157 request: 00:21:51.157 { 00:21:51.157 "name": "nvme0", 00:21:51.157 "trtype": "tcp", 00:21:51.157 "traddr": "10.0.0.2", 00:21:51.157 "adrfam": "ipv4", 00:21:51.157 "trsvcid": "4420", 00:21:51.157 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:51.157 "prchk_reftag": false, 00:21:51.157 "prchk_guard": false, 00:21:51.157 "hdgst": false, 00:21:51.157 "ddgst": false, 00:21:51.157 "dhchap_key": "key0", 00:21:51.157 "dhchap_ctrlr_key": "key1", 00:21:51.157 "allow_unrecognized_csi": false, 00:21:51.157 "method": "bdev_nvme_attach_controller", 00:21:51.157 "req_id": 1 00:21:51.157 } 00:21:51.157 Got JSON-RPC error response 00:21:51.157 response: 00:21:51.157 { 00:21:51.157 "code": -5, 00:21:51.157 "message": "Input/output error" 00:21:51.157 } 00:21:51.157 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:51.157 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:51.157 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:51.157 16:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:51.157 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:51.157 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:51.157 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:51.416 nvme0n1 00:21:51.416 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:51.416 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:51.416 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:51.674 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:51.933 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:52.501 nvme0n1 00:21:52.501 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:52.501 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:52.501 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:52.762 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.020 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.020 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:53.020 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: --dhchap-ctrl-secret DHHC-1:03:ZWVkODYwMTcxYzE0MjY2M2E4ZWJkOTg5ZDYxZjIyMzk0ZmUxODVhNjFmYmFlOGNlMTBhZjE5ZGNkOTRjY2VhYttj2UY=: 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.586 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:53.586 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:54.154 request: 00:21:54.154 { 00:21:54.154 "name": "nvme0", 00:21:54.154 "trtype": "tcp", 00:21:54.154 "traddr": "10.0.0.2", 00:21:54.154 "adrfam": "ipv4", 00:21:54.154 "trsvcid": "4420", 00:21:54.154 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.154 "prchk_reftag": false, 00:21:54.154 "prchk_guard": false, 00:21:54.154 "hdgst": false, 00:21:54.154 "ddgst": false, 00:21:54.154 "dhchap_key": "key1", 00:21:54.154 "allow_unrecognized_csi": false, 00:21:54.154 "method": "bdev_nvme_attach_controller", 00:21:54.154 "req_id": 1 00:21:54.154 } 00:21:54.154 Got JSON-RPC error response 00:21:54.154 response: 00:21:54.154 { 00:21:54.154 "code": -5, 00:21:54.154 "message": "Input/output error" 00:21:54.154 } 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.154 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.090 nvme0n1 00:21:55.090 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:55.090 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:55.090 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.090 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.090 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.090 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.348 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.349 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.349 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.349 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.349 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:55.349 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:55.349 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:55.607 nvme0n1 00:21:55.607 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:55.607 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:55.607 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.866 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.866 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.866 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.866 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:55.866 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.866 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: '' 2s 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: ]] 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2MyNTQ4Mzk4MmU5YjhjYTNmMjE1YWE4Yjk1MzM2N2YbZzcz: 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:56.125 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: 2s 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: ]] 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzliNjJkYjk5MDRkNjkwYzRmODUyOGY1YmFhN2ExYzZkYjk2NmMyNzM5NjM5M2U5/zNCjA==: 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:58.027 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:59.931 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:59.931 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.190 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.758 nvme0n1 00:22:00.758 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:00.758 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.758 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.758 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.758 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:00.758 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:01.326 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:01.326 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:01.326 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:01.585 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:01.844 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:02.102 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.102 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:02.102 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.102 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:02.102 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:02.361 request: 00:22:02.361 { 00:22:02.361 "name": "nvme0", 00:22:02.361 "dhchap_key": "key1", 00:22:02.361 "dhchap_ctrlr_key": "key3", 00:22:02.361 "method": "bdev_nvme_set_keys", 00:22:02.361 "req_id": 1 00:22:02.361 } 00:22:02.361 Got JSON-RPC error response 00:22:02.361 response: 00:22:02.361 { 00:22:02.361 "code": -13, 00:22:02.361 "message": "Permission denied" 00:22:02.361 } 00:22:02.361 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.361 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.361 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.361 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.361 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:02.362 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:02.362 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.620 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:02.620 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:03.556 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:03.556 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:03.556 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:03.815 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:04.758 nvme0n1 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
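The NOT hostrpc bdev_nvme_set_keys invocation being traced here is the harness's negative assertion: run the wrapped command, record its exit status, and succeed only if it failed, so the test passes exactly when the RPC is rejected (the -13 "Permission denied" response follows below). A condensed sketch of the idiom, not the verbatim autotest_common.sh helper (the real one also screens the argument through type -t via valid_exec_arg, as the trace shows):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture its exit status
        (( es != 0 ))    # invert: NOT succeeds only when the command failed
    }
    # as used here: passes only if rotating nvme0 to key2/key0 is refused
    NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0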
00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:04.759 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:05.019 request: 00:22:05.019 { 00:22:05.019 "name": "nvme0", 00:22:05.019 "dhchap_key": "key2", 00:22:05.019 "dhchap_ctrlr_key": "key0", 00:22:05.019 "method": "bdev_nvme_set_keys", 00:22:05.019 "req_id": 1 00:22:05.019 } 00:22:05.019 Got JSON-RPC error response 00:22:05.019 response: 00:22:05.019 { 00:22:05.019 "code": -13, 00:22:05.019 "message": "Permission denied" 00:22:05.019 } 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:05.019 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.278 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:05.278 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:06.215 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:06.215 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:06.215 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 981813 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 981813 ']' 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 981813 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:06.474 16:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981813 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981813' 00:22:06.474 killing process with pid 981813 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 981813 00:22:06.474 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 981813 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.045 rmmod nvme_tcp 00:22:07.045 rmmod nvme_fabrics 00:22:07.045 rmmod nvme_keyring 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1003408 ']' 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1003408 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1003408 ']' 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1003408 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1003408 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.045 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1003408' 00:22:07.046 killing process with pid 1003408 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1003408 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 1003408 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.046 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uUl /tmp/spdk.key-sha256.x89 /tmp/spdk.key-sha384.vwr /tmp/spdk.key-sha512.gmp /tmp/spdk.key-sha512.x4C /tmp/spdk.key-sha384.F8c /tmp/spdk.key-sha256.UAR '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:09.581 00:22:09.581 real 2m31.764s 00:22:09.581 user 5m49.813s 00:22:09.581 sys 0m24.275s 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.581 ************************************ 00:22:09.581 END TEST nvmf_auth_target 00:22:09.581 ************************************ 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:09.581 16:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:09.582 ************************************ 00:22:09.582 START TEST nvmf_bdevio_no_huge 00:22:09.582 ************************************ 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:09.582 * Looking for test storage... 
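The teardown that closed nvmf_auth_target above follows the standard pattern: kill the host-side app and the nvmf target, undo the SPDK iptables rules, flush the test interface, and delete the throwaway DH-CHAP key files. Condensed from the traced cleanup/nvmftestfini steps, with pids and the interface name copied from this run (the glob stands in for the individual key paths listed above):

    kill 981813             # host-side SPDK app ("reactor_1" in the ps check above)
    kill 1003408            # nvmf target ("reactor_0")
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip SPDK test rules
    ip -4 addr flush cvl_0_1                               # clear the test interface
    rm -f /tmp/spdk.key-*                                  # remove generated key material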
00:22:09.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:09.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.582 --rc genhtml_branch_coverage=1 00:22:09.582 --rc genhtml_function_coverage=1 00:22:09.582 --rc genhtml_legend=1 00:22:09.582 --rc geninfo_all_blocks=1 00:22:09.582 --rc geninfo_unexecuted_blocks=1 00:22:09.582 00:22:09.582 ' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:09.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.582 --rc genhtml_branch_coverage=1 00:22:09.582 --rc genhtml_function_coverage=1 00:22:09.582 --rc genhtml_legend=1 00:22:09.582 --rc geninfo_all_blocks=1 00:22:09.582 --rc geninfo_unexecuted_blocks=1 00:22:09.582 00:22:09.582 ' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:09.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.582 --rc genhtml_branch_coverage=1 00:22:09.582 --rc genhtml_function_coverage=1 00:22:09.582 --rc genhtml_legend=1 00:22:09.582 --rc geninfo_all_blocks=1 00:22:09.582 --rc geninfo_unexecuted_blocks=1 00:22:09.582 00:22:09.582 ' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:09.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.582 --rc genhtml_branch_coverage=1 00:22:09.582 --rc genhtml_function_coverage=1 00:22:09.582 --rc genhtml_legend=1 00:22:09.582 --rc geninfo_all_blocks=1 00:22:09.582 --rc geninfo_unexecuted_blocks=1 00:22:09.582 00:22:09.582 ' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.582 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:09.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.583 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.857 
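The "[: : integer expression expected" message just above comes from the traced test `'[' '' -eq 1 ']'` at nvmf/common.sh line 33: `[ ... -eq ... ]` needs integer operands on both sides, and the expanded variable is empty. The run is unaffected (the script simply falls through to the `-n` test at @37), but the failure mode and a defensive rewrite look like this (sketch; `flag` is a hypothetical variable name, not the one common.sh uses):

    flag=''
    [ "$flag" -eq 1 ]         # bash: [: : integer expression expected (exit != 0)
    [ "${flag:-0}" -eq 1 ]    # default the empty value to 0 first; no error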
16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:14.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:14.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:14.857 Found net devices under 0000:af:00.0: cvl_0_0 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:14.857 Found net devices under 0000:af:00.1: cvl_0_1 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.857 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.117 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.117 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.117 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.117 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:22:15.376 00:22:15.376 --- 10.0.0.2 ping statistics --- 00:22:15.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.376 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:15.376 00:22:15.376 --- 10.0.0.1 ping statistics --- 00:22:15.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.376 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1010196 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1010196 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1010196 ']' 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.376 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.376 [2024-12-16 16:28:03.865859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:15.376 [2024-12-16 16:28:03.865903] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:15.376 [2024-12-16 16:28:03.931896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.376 [2024-12-16 16:28:03.967814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.376 [2024-12-16 16:28:03.967848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.376 [2024-12-16 16:28:03.967855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.376 [2024-12-16 16:28:03.967861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.376 [2024-12-16 16:28:03.967867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
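The startup traced above is one launch-and-wait step: nvmf_tgt is run inside the cvl_0_0_ns_spdk namespace with hugepages disabled (--no-huge -s 1024) on core mask 0x78, and waitforlisten blocks until the RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A condensed sketch of that step, with paths shortened and the polling loop simplified relative to what autotest_common.sh actually does:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done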
00:22:15.376 [2024-12-16 16:28:03.968910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.376 [2024-12-16 16:28:03.969019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:15.376 [2024-12-16 16:28:03.969133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.376 [2024-12-16 16:28:03.969134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:15.635 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.635 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.636 [2024-12-16 16:28:04.117515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.636 Malloc0 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.636 [2024-12-16 16:28:04.161809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.636 { 00:22:15.636 "params": { 00:22:15.636 "name": "Nvme$subsystem", 00:22:15.636 "trtype": "$TEST_TRANSPORT", 00:22:15.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.636 "adrfam": "ipv4", 00:22:15.636 "trsvcid": "$NVMF_PORT", 00:22:15.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.636 "hdgst": ${hdgst:-false}, 00:22:15.636 "ddgst": ${ddgst:-false} 00:22:15.636 }, 00:22:15.636 "method": "bdev_nvme_attach_controller" 00:22:15.636 } 00:22:15.636 EOF 00:22:15.636 )") 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:15.636 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:15.636 "params": { 00:22:15.636 "name": "Nvme1", 00:22:15.636 "trtype": "tcp", 00:22:15.636 "traddr": "10.0.0.2", 00:22:15.636 "adrfam": "ipv4", 00:22:15.636 "trsvcid": "4420", 00:22:15.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.636 "hdgst": false, 00:22:15.636 "ddgst": false 00:22:15.636 }, 00:22:15.636 "method": "bdev_nvme_attach_controller" 00:22:15.636 }' 00:22:15.636 [2024-12-16 16:28:04.211573] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
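For reference, the rpc_cmd calls traced above amount to this provisioning sequence (rpc.py path shortened; every argument is exactly as it appears in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

That is: a 64 MiB malloc bdev with 512-byte blocks, exported as a namespace of cnode1 on the 10.0.0.2:4420 TCP listener, which is what bdevio then exercises.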
00:22:15.636 [2024-12-16 16:28:04.211615] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1010228 ] 00:22:15.895 [2024-12-16 16:28:04.279550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.895 [2024-12-16 16:28:04.329090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.895 [2024-12-16 16:28:04.329205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.895 [2024-12-16 16:28:04.329206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.154 I/O targets: 00:22:16.154 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:16.154 00:22:16.154 00:22:16.154 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.154 http://cunit.sourceforge.net/ 00:22:16.154 00:22:16.154 00:22:16.154 Suite: bdevio tests on: Nvme1n1 00:22:16.154 Test: blockdev write read block ...passed 00:22:16.154 Test: blockdev write zeroes read block ...passed 00:22:16.154 Test: blockdev write zeroes read no split ...passed 00:22:16.154 Test: blockdev write zeroes read split ...passed 00:22:16.154 Test: blockdev write zeroes read split partial ...passed 00:22:16.154 Test: blockdev reset ...[2024-12-16 16:28:04.755535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:16.154 [2024-12-16 16:28:04.755598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167dd00 (9): Bad file descriptor 00:22:16.413 [2024-12-16 16:28:04.769964] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
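The `--json /dev/fd/62` argument in the bdevio invocation above is process substitution: gen_nvmf_target_json (from test/nvmf/common.sh) prints the bdev_nvme_attach_controller config shown in the trace to a pipe fd, so no temporary file is written. A minimal equivalent of the same pattern, with the binary path shortened:

    # gen_nvmf_target_json emits the '{ "params": { "name": "Nvme1", ... } }' config
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024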
00:22:16.413 passed 00:22:16.413 Test: blockdev write read 8 blocks ...passed 00:22:16.413 Test: blockdev write read size > 128k ...passed 00:22:16.413 Test: blockdev write read invalid size ...passed 00:22:16.413 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.413 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.413 Test: blockdev write read max offset ...passed 00:22:16.413 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.413 Test: blockdev writev readv 8 blocks ...passed 00:22:16.413 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.413 Test: blockdev writev readv block ...passed 00:22:16.413 Test: blockdev writev readv size > 128k ...passed 00:22:16.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.674 Test: blockdev comparev and writev ...[2024-12-16 16:28:05.022786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.022819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.022833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.022841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.023070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.023081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.023093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.023105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.023339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.023349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.023360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.023367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.023594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.023603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.023614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.674 [2024-12-16 16:28:05.023620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:16.674 passed 00:22:16.674 Test: blockdev nvme passthru rw ...passed 00:22:16.674 Test: blockdev nvme passthru vendor specific ...[2024-12-16 16:28:05.105524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.674 [2024-12-16 16:28:05.105541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.105649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.674 [2024-12-16 16:28:05.105662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.105761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.674 [2024-12-16 16:28:05.105770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:16.674 [2024-12-16 16:28:05.105868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.674 [2024-12-16 16:28:05.105877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:16.674 passed 00:22:16.674 Test: blockdev nvme admin passthru ...passed 00:22:16.674 Test: blockdev copy ...passed 00:22:16.674 00:22:16.674 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.674 suites 1 1 n/a 0 0 00:22:16.674 tests 23 23 23 0 0 00:22:16.674 asserts 152 152 152 0 n/a 00:22:16.674 00:22:16.674 Elapsed time = 1.062 seconds 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.935 rmmod nvme_tcp 00:22:16.935 rmmod nvme_fabrics 00:22:16.935 rmmod nvme_keyring 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:16.935 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1010196 ']' 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1010196 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1010196 ']' 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1010196 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.936 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010196 00:22:17.194 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:17.194 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:17.194 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010196' 00:22:17.194 killing process with pid 1010196 00:22:17.194 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1010196 00:22:17.194 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1010196 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.454 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.360 00:22:19.360 real 0m10.145s 00:22:19.360 user 0m11.196s 00:22:19.360 sys 0m5.168s 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.360 ************************************ 00:22:19.360 END TEST nvmf_bdevio_no_huge 00:22:19.360 ************************************ 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.360 16:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:19.620 ************************************ 00:22:19.620 START TEST nvmf_tls 00:22:19.620 ************************************ 00:22:19.620 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:19.620 * Looking for test storage... 00:22:19.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.620 --rc genhtml_branch_coverage=1 00:22:19.620 --rc genhtml_function_coverage=1 00:22:19.620 --rc genhtml_legend=1 00:22:19.620 --rc geninfo_all_blocks=1 00:22:19.620 --rc geninfo_unexecuted_blocks=1 00:22:19.620 00:22:19.620 ' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.620 --rc genhtml_branch_coverage=1 00:22:19.620 --rc genhtml_function_coverage=1 00:22:19.620 --rc genhtml_legend=1 00:22:19.620 --rc geninfo_all_blocks=1 00:22:19.620 --rc geninfo_unexecuted_blocks=1 00:22:19.620 00:22:19.620 ' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.620 --rc genhtml_branch_coverage=1 00:22:19.620 --rc genhtml_function_coverage=1 00:22:19.620 --rc genhtml_legend=1 00:22:19.620 --rc geninfo_all_blocks=1 00:22:19.620 --rc geninfo_unexecuted_blocks=1 00:22:19.620 00:22:19.620 ' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.620 --rc genhtml_branch_coverage=1 00:22:19.620 --rc genhtml_function_coverage=1 00:22:19.620 --rc genhtml_legend=1 00:22:19.620 --rc geninfo_all_blocks=1 00:22:19.620 --rc geninfo_unexecuted_blocks=1 00:22:19.620 00:22:19.620 ' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
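The cmp_versions trace above (IFS=.-: splits, per-component decimal checks, the ver1_l/ver2_l length comparison) implements a component-wise version test used here to pick lcov 1.x coverage flags. A reduced sketch of that logic, not a verbatim copy of scripts/common.sh:

    lt() {  # usage: lt 1.15 2  -> exit 0 (true) when version $1 < $2
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # missing components compare as 0; non-numeric components not handled
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # versions are equal
    }
    lt 1.15 2 && echo 'lcov < 2: use the 1.x --rc branch/function coverage flags'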
00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.620 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.621 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:26.193 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:26.193 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:26.193 Found net devices under 0000:af:00.0: cvl_0_0 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:26.193 Found net devices under 0000:af:00.1: cvl_0_1 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.193 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:22:26.193 00:22:26.193 --- 10.0.0.2 ping statistics --- 00:22:26.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.193 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:26.193 00:22:26.193 --- 10.0.0.1 ping statistics --- 00:22:26.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.193 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1013920 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1013920 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1013920 ']' 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.193 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.193 [2024-12-16 16:28:14.139484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
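The preceding block builds the split topology the whole TLS suite runs on: physical port cvl_0_0 is moved into the freshly created cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 (the target side, where nvmf_tgt is then launched via ip netns exec with --wait-for-rpc), while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; an iptables rule admits TCP port 4420 and one ping in each direction proves the path before the target starts. The same wiring can be reproduced without the physical E810 ports using a veth pair (interface and namespace names below are illustrative stand-ins, not the script's own):

    # Minimal stand-in for the cvl_0_0/cvl_0_1 split, using veth instead of real NICs.
    ip netns add spdk_tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, mirroring the check in the log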
00:22:26.194 [2024-12-16 16:28:14.139525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.194 [2024-12-16 16:28:14.220683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.194 [2024-12-16 16:28:14.241972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.194 [2024-12-16 16:28:14.242006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.194 [2024-12-16 16:28:14.242013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.194 [2024-12-16 16:28:14.242022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.194 [2024-12-16 16:28:14.242027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.194 [2024-12-16 16:28:14.242540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:26.194 true 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:26.194 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:26.452 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.452 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:26.713 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:26.713 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:26.713 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:26.975 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.975 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:26.975 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:26.975 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:26.975 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.975 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:27.233 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:27.233 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:27.233 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:27.491 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.491 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:27.751 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:27.751 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:27.751 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:27.751 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.751 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.kJ7TIQJX5s 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.sxyMsMqjrv 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kJ7TIQJX5s 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.sxyMsMqjrv 00:22:28.010 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:28.269 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:28.528 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.kJ7TIQJX5s 00:22:28.528 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kJ7TIQJX5s 00:22:28.528 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:28.787 [2024-12-16 16:28:17.200446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.787 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.047 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:29.047 [2024-12-16 16:28:17.557348] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.047 [2024-12-16 16:28:17.557579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.047 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:29.306 malloc0 00:22:29.306 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:29.564 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kJ7TIQJX5s 00:22:29.564 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.823 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kJ7TIQJX5s 00:22:39.979 Initializing NVMe Controllers 00:22:39.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:39.979 Initialization complete. Launching workers. 00:22:39.979 ======================================================== 00:22:39.979 Latency(us) 00:22:39.979 Device Information : IOPS MiB/s Average min max 00:22:39.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16889.79 65.98 3789.35 735.00 5983.94 00:22:39.979 ======================================================== 00:22:39.979 Total : 16889.79 65.98 3789.35 735.00 5983.94 00:22:39.979 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJ7TIQJX5s 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kJ7TIQJX5s 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1016211 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1016211 /var/tmp/bdevperf.sock 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1016211 ']' 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:39.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.979 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.979 [2024-12-16 16:28:28.482678] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:39.979 [2024-12-16 16:28:28.482721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016211 ] 00:22:39.979 [2024-12-16 16:28:28.557496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.979 [2024-12-16 16:28:28.579637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.241 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.241 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:40.241 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kJ7TIQJX5s 00:22:40.501 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.501 [2024-12-16 16:28:29.034674] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.501 TLSTESTn1 00:22:40.760 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:40.760 Running I/O for 10 seconds... 
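Everything between the modprobe and this first bdevperf run is the TLS-enabled target bring-up: after probing and restoring the ssl socket implementation's tls_version and kTLS settings, two interchange PSKs are rendered as NVMeTLSkey-1:01:<base64>: strings (per the NVMe TLS interchange format, the 01 field selects SHA-256 and the base64 payload carries the raw key bytes plus a CRC-32), written to mode-0600 temp files, and the first one is registered as keyring entry key0 and bound to host1. Condensed to plain RPC calls, the sequence the trace performs is (commands, paths, and NQNs copied from the log; rpc.py is assumed to point at the target's socket inside the namespace):

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.kJ7TIQJX5s
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With that in place, spdk_nvme_perf (-S ssl --psk-path) completes the sanity run over TLS, and bdevperf attaches TLSTESTn1 with the same key0 for the 10-second verify workload that follows.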
00:22:42.632 5498.00 IOPS, 21.48 MiB/s [2024-12-16T15:28:32.618Z] 5543.50 IOPS, 21.65 MiB/s [2024-12-16T15:28:33.553Z] 5570.33 IOPS, 21.76 MiB/s [2024-12-16T15:28:34.489Z] 5573.50 IOPS, 21.77 MiB/s [2024-12-16T15:28:35.423Z] 5581.00 IOPS, 21.80 MiB/s [2024-12-16T15:28:36.358Z] 5541.83 IOPS, 21.65 MiB/s [2024-12-16T15:28:37.294Z] 5478.86 IOPS, 21.40 MiB/s [2024-12-16T15:28:38.230Z] 5420.12 IOPS, 21.17 MiB/s [2024-12-16T15:28:39.608Z] 5383.44 IOPS, 21.03 MiB/s [2024-12-16T15:28:39.608Z] 5329.70 IOPS, 20.82 MiB/s 00:22:50.999 Latency(us) 00:22:50.999 [2024-12-16T15:28:39.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.999 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.999 Verification LBA range: start 0x0 length 0x2000 00:22:50.999 TLSTESTn1 : 10.02 5333.37 20.83 0.00 0.00 23964.25 4899.60 33204.91 00:22:50.999 [2024-12-16T15:28:39.608Z] =================================================================================================================== 00:22:50.999 [2024-12-16T15:28:39.608Z] Total : 5333.37 20.83 0.00 0.00 23964.25 4899.60 33204.91 00:22:50.999 { 00:22:50.999 "results": [ 00:22:50.999 { 00:22:50.999 "job": "TLSTESTn1", 00:22:50.999 "core_mask": "0x4", 00:22:50.999 "workload": "verify", 00:22:50.999 "status": "finished", 00:22:50.999 "verify_range": { 00:22:50.999 "start": 0, 00:22:50.999 "length": 8192 00:22:50.999 }, 00:22:50.999 "queue_depth": 128, 00:22:50.999 "io_size": 4096, 00:22:50.999 "runtime": 10.01694, 00:22:50.999 "iops": 5333.365279217006, 00:22:50.999 "mibps": 20.83345812194143, 00:22:50.999 "io_failed": 0, 00:22:50.999 "io_timeout": 0, 00:22:50.999 "avg_latency_us": 23964.252040103253, 00:22:50.999 "min_latency_us": 4899.596190476191, 00:22:50.999 "max_latency_us": 33204.90666666667 00:22:50.999 } 00:22:50.999 ], 00:22:50.999 "core_count": 1 00:22:50.999 } 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1016211 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1016211 ']' 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1016211 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016211 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016211' 00:22:50.999 killing process with pid 1016211 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1016211 00:22:50.999 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.999 00:22:50.999 Latency(us) 00:22:50.999 [2024-12-16T15:28:39.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.999 [2024-12-16T15:28:39.608Z] 
=================================================================================================================== 00:22:50.999 [2024-12-16T15:28:39.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1016211 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sxyMsMqjrv 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sxyMsMqjrv 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sxyMsMqjrv 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sxyMsMqjrv 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.999 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1017991 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1017991 /var/tmp/bdevperf.sock 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1017991 ']' 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.000 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.000 [2024-12-16 16:28:39.522131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:51.000 [2024-12-16 16:28:39.522191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017991 ] 00:22:51.000 [2024-12-16 16:28:39.596818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.259 [2024-12-16 16:28:39.619749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.259 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.259 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:51.259 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sxyMsMqjrv 00:22:51.517 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:51.517 [2024-12-16 16:28:40.066757] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.517 [2024-12-16 16:28:40.071437] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:51.517 [2024-12-16 16:28:40.072016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d90c0 (107): Transport endpoint is not connected 00:22:51.517 [2024-12-16 16:28:40.073008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d90c0 (9): Bad file descriptor 00:22:51.517 [2024-12-16 16:28:40.074009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:51.517 [2024-12-16 16:28:40.074022] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:51.518 [2024-12-16 16:28:40.074029] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:51.518 [2024-12-16 16:28:40.074038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
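This first NOT probe registers the second key (/tmp/tmp.sxyMsMqjrv) on the bdevperf side while the target only accepts the first one for host1, so the TLS handshake never completes: the server drops the socket, the initiator reads errno 107 (ENOTCONN, the "Transport endpoint is not connected" seen above) off the dead connection, and the controller lands in failed state. Reduced to the two RPCs visible in the trace, the probe's shape is the following sketch (an attach that succeeds here would itself be the test failure):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sxyMsMqjrv
    if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo "unexpected: attach succeeded with the wrong PSK" >&2
        exit 1
    fi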
00:22:51.518 request: 00:22:51.518 { 00:22:51.518 "name": "TLSTEST", 00:22:51.518 "trtype": "tcp", 00:22:51.518 "traddr": "10.0.0.2", 00:22:51.518 "adrfam": "ipv4", 00:22:51.518 "trsvcid": "4420", 00:22:51.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.518 "prchk_reftag": false, 00:22:51.518 "prchk_guard": false, 00:22:51.518 "hdgst": false, 00:22:51.518 "ddgst": false, 00:22:51.518 "psk": "key0", 00:22:51.518 "allow_unrecognized_csi": false, 00:22:51.518 "method": "bdev_nvme_attach_controller", 00:22:51.518 "req_id": 1 00:22:51.518 } 00:22:51.518 Got JSON-RPC error response 00:22:51.518 response: 00:22:51.518 { 00:22:51.518 "code": -5, 00:22:51.518 "message": "Input/output error" 00:22:51.518 } 00:22:51.518 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1017991 00:22:51.518 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1017991 ']' 00:22:51.518 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1017991 00:22:51.518 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:51.518 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.518 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017991 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017991' 00:22:51.777 killing process with pid 1017991 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1017991 00:22:51.777 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.777 00:22:51.777 Latency(us) 00:22:51.777 [2024-12-16T15:28:40.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.777 [2024-12-16T15:28:40.386Z] =================================================================================================================== 00:22:51.777 [2024-12-16T15:28:40.386Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1017991 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kJ7TIQJX5s 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.kJ7TIQJX5s 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:51.777 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kJ7TIQJX5s 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kJ7TIQJX5s 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018217 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018217 /var/tmp/bdevperf.sock 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018217 ']' 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.778 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.778 [2024-12-16 16:28:40.356032] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:51.778 [2024-12-16 16:28:40.356083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018217 ] 00:22:52.036 [2024-12-16 16:28:40.431539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.036 [2024-12-16 16:28:40.451969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.036 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.036 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.036 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kJ7TIQJX5s 00:22:52.295 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:52.295 [2024-12-16 16:28:40.899522] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.555 [2024-12-16 16:28:40.911074] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:52.555 [2024-12-16 16:28:40.911103] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:52.555 [2024-12-16 16:28:40.911126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.555 [2024-12-16 16:28:40.911841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7650c0 (107): Transport endpoint is not connected 00:22:52.556 [2024-12-16 16:28:40.912834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7650c0 (9): Bad file descriptor 00:22:52.556 [2024-12-16 16:28:40.913836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:52.556 [2024-12-16 16:28:40.913846] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.556 [2024-12-16 16:28:40.913853] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:52.556 [2024-12-16 16:28:40.913861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
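The second probe presents the correct key but the unregistered hostnqn host2, and the errors show exactly where it dies: the target looks up the PSK by the TLS identity string, built from a fixed prefix plus hostnqn and subnqn, and no entry exists for host2. A one-line reconstruction of the identity the server reports (format taken from the log; reading the 0, R, and 01 fields as protocol version, retained-PSK flag, and SHA-256 hash indicator is an assumption here):

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

So a key registered via nvmf_subsystem_add_host for one hostnqn does nothing for any other, which is the property this case verifies.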
00:22:52.556 request: 00:22:52.556 { 00:22:52.556 "name": "TLSTEST", 00:22:52.556 "trtype": "tcp", 00:22:52.556 "traddr": "10.0.0.2", 00:22:52.556 "adrfam": "ipv4", 00:22:52.556 "trsvcid": "4420", 00:22:52.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.556 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:52.556 "prchk_reftag": false, 00:22:52.556 "prchk_guard": false, 00:22:52.556 "hdgst": false, 00:22:52.556 "ddgst": false, 00:22:52.556 "psk": "key0", 00:22:52.556 "allow_unrecognized_csi": false, 00:22:52.556 "method": "bdev_nvme_attach_controller", 00:22:52.556 "req_id": 1 00:22:52.556 } 00:22:52.556 Got JSON-RPC error response 00:22:52.556 response: 00:22:52.556 { 00:22:52.556 "code": -5, 00:22:52.556 "message": "Input/output error" 00:22:52.556 } 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018217 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018217 ']' 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018217 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018217 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018217' 00:22:52.556 killing process with pid 1018217 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018217 00:22:52.556 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.556 00:22:52.556 Latency(us) 00:22:52.556 [2024-12-16T15:28:41.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.556 [2024-12-16T15:28:41.165Z] =================================================================================================================== 00:22:52.556 [2024-12-16T15:28:41.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.556 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018217 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJ7TIQJX5s 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.kJ7TIQJX5s 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJ7TIQJX5s 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kJ7TIQJX5s 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018237 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018237 /var/tmp/bdevperf.sock 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018237 ']' 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.556 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.815 [2024-12-16 16:28:41.186990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:52.815 [2024-12-16 16:28:41.187042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018237 ] 00:22:52.815 [2024-12-16 16:28:41.263083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.815 [2024-12-16 16:28:41.282915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.815 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.815 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.815 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kJ7TIQJX5s 00:22:53.073 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.332 [2024-12-16 16:28:41.778093] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.332 [2024-12-16 16:28:41.783297] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:53.332 [2024-12-16 16:28:41.783316] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:53.332 [2024-12-16 16:28:41.783355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.332 [2024-12-16 16:28:41.783403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc60c0 (107): Transport endpoint is not connected 00:22:53.332 [2024-12-16 16:28:41.784385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc60c0 (9): Bad file descriptor 00:22:53.332 [2024-12-16 16:28:41.785385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:53.332 [2024-12-16 16:28:41.785395] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.332 [2024-12-16 16:28:41.785402] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:53.332 [2024-12-16 16:28:41.785410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
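The cnode2/host1 attempt above fails the same way, and the request dump plus the return 1 / es=1 bookkeeping below is the negative assertion unwinding. Simplified, the NOT wrapper from autotest_common.sh visible in the trace behaves roughly like this (a sketch only; the real helper also validates the wrapped command and tracks exit codes more carefully):

# succeeds only when the wrapped command fails; used because attaching with a
# PSK the target does not know is supposed to fail
NOT() {
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJ7TIQJX5s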
00:22:53.332 request: 00:22:53.332 { 00:22:53.332 "name": "TLSTEST", 00:22:53.332 "trtype": "tcp", 00:22:53.332 "traddr": "10.0.0.2", 00:22:53.332 "adrfam": "ipv4", 00:22:53.332 "trsvcid": "4420", 00:22:53.332 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.332 "prchk_reftag": false, 00:22:53.332 "prchk_guard": false, 00:22:53.332 "hdgst": false, 00:22:53.332 "ddgst": false, 00:22:53.332 "psk": "key0", 00:22:53.332 "allow_unrecognized_csi": false, 00:22:53.332 "method": "bdev_nvme_attach_controller", 00:22:53.332 "req_id": 1 00:22:53.332 } 00:22:53.332 Got JSON-RPC error response 00:22:53.332 response: 00:22:53.332 { 00:22:53.332 "code": -5, 00:22:53.332 "message": "Input/output error" 00:22:53.332 } 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018237 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018237 ']' 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018237 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018237 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018237' 00:22:53.332 killing process with pid 1018237 00:22:53.332 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018237 00:22:53.332 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.332 00:22:53.332 Latency(us) 00:22:53.332 [2024-12-16T15:28:41.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.333 [2024-12-16T15:28:41.942Z] =================================================================================================================== 00:22:53.333 [2024-12-16T15:28:41.942Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.333 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018237 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:53.592 
16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018464 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018464 /var/tmp/bdevperf.sock 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018464 ']' 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.592 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.592 [2024-12-16 16:28:42.058887] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:53.592 [2024-12-16 16:28:42.058936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018464 ] 00:22:53.592 [2024-12-16 16:28:42.134413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.592 [2024-12-16 16:28:42.153974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.851 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.851 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.851 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:53.852 [2024-12-16 16:28:42.408548] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:53.852 [2024-12-16 16:28:42.408581] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:53.852 request: 00:22:53.852 { 00:22:53.852 "name": "key0", 00:22:53.852 "path": "", 00:22:53.852 "method": "keyring_file_add_key", 00:22:53.852 "req_id": 1 00:22:53.852 } 00:22:53.852 Got JSON-RPC error response 00:22:53.852 response: 00:22:53.852 { 00:22:53.852 "code": -1, 00:22:53.852 "message": "Operation not permitted" 00:22:53.852 } 00:22:53.852 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.111 [2024-12-16 16:28:42.597120] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.111 [2024-12-16 16:28:42.597150] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:54.111 request: 00:22:54.111 { 00:22:54.111 "name": "TLSTEST", 00:22:54.111 "trtype": "tcp", 00:22:54.111 "traddr": "10.0.0.2", 00:22:54.111 "adrfam": "ipv4", 00:22:54.111 "trsvcid": "4420", 00:22:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.111 "prchk_reftag": false, 00:22:54.111 "prchk_guard": false, 00:22:54.111 "hdgst": false, 00:22:54.111 "ddgst": false, 00:22:54.111 "psk": "key0", 00:22:54.111 "allow_unrecognized_csi": false, 00:22:54.111 "method": "bdev_nvme_attach_controller", 00:22:54.111 "req_id": 1 00:22:54.111 } 00:22:54.111 Got JSON-RPC error response 00:22:54.111 response: 00:22:54.111 { 00:22:54.111 "code": -126, 00:22:54.111 "message": "Required key not available" 00:22:54.111 } 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018464 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018464 ']' 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018464 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1018464 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018464' 00:22:54.111 killing process with pid 1018464 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018464 00:22:54.111 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.111 00:22:54.111 Latency(us) 00:22:54.111 [2024-12-16T15:28:42.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.111 [2024-12-16T15:28:42.720Z] =================================================================================================================== 00:22:54.111 [2024-12-16T15:28:42.720Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.111 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018464 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1013920 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1013920 ']' 00:22:54.370 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1013920 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1013920 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1013920' 00:22:54.371 killing process with pid 1013920 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1013920 00:22:54.371 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1013920 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.630 16:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.YuToDBlboT 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.YuToDBlboT 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1018702 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1018702 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018702 ']' 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.630 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.630 [2024-12-16 16:28:43.128061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:54.630 [2024-12-16 16:28:43.128111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.630 [2024-12-16 16:28:43.200640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.630 [2024-12-16 16:28:43.221099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.630 [2024-12-16 16:28:43.221134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
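The key_long generated just above follows the NVMe TLS PSK interchange framing; the remaining target startup notices continue below. A minimal sketch of the construction as inferred from the trace: the configured secret is taken as literal ASCII bytes, a little-endian CRC32 is appended, the result is base64-encoded, and the 02 field is the hash identifier (presumably the 48-byte SHA-384 variant, given the 48-character secret):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                  # secret used verbatim as bytes
crc = struct.pack("<I", zlib.crc32(secret))    # 4-byte little-endian CRC32
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(secret + crc).decode())
EOF
# should print the same key_long as the trace:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: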
00:22:54.630 [2024-12-16 16:28:43.221141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.630 [2024-12-16 16:28:43.221147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.630 [2024-12-16 16:28:43.221152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.630 [2024-12-16 16:28:43.221645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.YuToDBlboT 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YuToDBlboT 00:22:54.889 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.148 [2024-12-16 16:28:43.515864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.148 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.148 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.407 [2024-12-16 16:28:43.900871] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.407 [2024-12-16 16:28:43.901073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.407 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.666 malloc0 00:22:55.666 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.666 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:22:55.925 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YuToDBlboT 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YuToDBlboT 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018946 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018946 /var/tmp/bdevperf.sock 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018946 ']' 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.185 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.185 [2024-12-16 16:28:44.668463] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:56.185 [2024-12-16 16:28:44.668509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018946 ] 00:22:56.185 [2024-12-16 16:28:44.741926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.185 [2024-12-16 16:28:44.763988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.444 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.444 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.444 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:22:56.444 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.704 [2024-12-16 16:28:45.210885] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.704 TLSTESTn1 00:22:56.704 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.962 Running I/O for 10 seconds... 00:22:58.834 5487.00 IOPS, 21.43 MiB/s [2024-12-16T15:28:48.820Z] 5604.00 IOPS, 21.89 MiB/s [2024-12-16T15:28:49.386Z] 5573.67 IOPS, 21.77 MiB/s [2024-12-16T15:28:50.763Z] 5513.50 IOPS, 21.54 MiB/s [2024-12-16T15:28:51.698Z] 5524.40 IOPS, 21.58 MiB/s [2024-12-16T15:28:52.634Z] 5534.17 IOPS, 21.62 MiB/s [2024-12-16T15:28:53.570Z] 5538.57 IOPS, 21.64 MiB/s [2024-12-16T15:28:54.505Z] 5559.62 IOPS, 21.72 MiB/s [2024-12-16T15:28:55.442Z] 5560.22 IOPS, 21.72 MiB/s [2024-12-16T15:28:55.442Z] 5566.30 IOPS, 21.74 MiB/s 00:23:06.833 Latency(us) 00:23:06.833 [2024-12-16T15:28:55.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.833 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.833 Verification LBA range: start 0x0 length 0x2000 00:23:06.833 TLSTESTn1 : 10.02 5569.55 21.76 0.00 0.00 22947.43 5898.24 23842.62 00:23:06.833 [2024-12-16T15:28:55.442Z] =================================================================================================================== 00:23:06.833 [2024-12-16T15:28:55.442Z] Total : 5569.55 21.76 0.00 0.00 22947.43 5898.24 23842.62 00:23:06.833 { 00:23:06.833 "results": [ 00:23:06.833 { 00:23:06.833 "job": "TLSTESTn1", 00:23:06.833 "core_mask": "0x4", 00:23:06.833 "workload": "verify", 00:23:06.833 "status": "finished", 00:23:06.833 "verify_range": { 00:23:06.833 "start": 0, 00:23:06.833 "length": 8192 00:23:06.833 }, 00:23:06.833 "queue_depth": 128, 00:23:06.833 "io_size": 4096, 00:23:06.833 "runtime": 10.016784, 00:23:06.833 "iops": 5569.5520638160915, 00:23:06.833 "mibps": 21.756062749281607, 00:23:06.833 "io_failed": 0, 00:23:06.833 "io_timeout": 0, 00:23:06.833 "avg_latency_us": 22947.43485551427, 00:23:06.833 "min_latency_us": 5898.24, 00:23:06.833 "max_latency_us": 23842.620952380952 00:23:06.833 } 00:23:06.833 ], 00:23:06.833 "core_count": 1 
00:23:06.833 } 00:23:06.833 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.833 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1018946 00:23:06.833 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018946 ']' 00:23:06.833 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018946 00:23:06.833 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.833 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.091 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018946 00:23:07.091 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.091 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.091 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018946' 00:23:07.091 killing process with pid 1018946 00:23:07.091 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018946 00:23:07.091 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.091 00:23:07.091 Latency(us) 00:23:07.091 [2024-12-16T15:28:55.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.092 [2024-12-16T15:28:55.701Z] =================================================================================================================== 00:23:07.092 [2024-12-16T15:28:55.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018946 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.YuToDBlboT 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YuToDBlboT 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YuToDBlboT 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YuToDBlboT 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.092 16:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YuToDBlboT 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1020676 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1020676 /var/tmp/bdevperf.sock 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020676 ']' 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.092 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.092 [2024-12-16 16:28:55.689932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:07.092 [2024-12-16 16:28:55.689976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020676 ] 00:23:07.350 [2024-12-16 16:28:55.764362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.350 [2024-12-16 16:28:55.787215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.350 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.350 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.350 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:07.608 [2024-12-16 16:28:56.034583] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YuToDBlboT': 0100666 00:23:07.608 [2024-12-16 16:28:56.034608] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:07.608 request: 00:23:07.608 { 00:23:07.608 "name": "key0", 00:23:07.608 "path": "/tmp/tmp.YuToDBlboT", 00:23:07.608 "method": "keyring_file_add_key", 00:23:07.608 "req_id": 1 00:23:07.608 } 00:23:07.608 Got JSON-RPC error response 00:23:07.608 response: 00:23:07.608 { 00:23:07.608 "code": -1, 00:23:07.608 "message": "Operation not permitted" 00:23:07.608 } 00:23:07.609 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.868 [2024-12-16 16:28:56.235175] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.868 [2024-12-16 16:28:56.235204] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:07.868 request: 00:23:07.868 { 00:23:07.868 "name": "TLSTEST", 00:23:07.868 "trtype": "tcp", 00:23:07.868 "traddr": "10.0.0.2", 00:23:07.868 "adrfam": "ipv4", 00:23:07.868 "trsvcid": "4420", 00:23:07.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.868 "prchk_reftag": false, 00:23:07.868 "prchk_guard": false, 00:23:07.868 "hdgst": false, 00:23:07.868 "ddgst": false, 00:23:07.868 "psk": "key0", 00:23:07.868 "allow_unrecognized_csi": false, 00:23:07.868 "method": "bdev_nvme_attach_controller", 00:23:07.868 "req_id": 1 00:23:07.868 } 00:23:07.868 Got JSON-RPC error response 00:23:07.868 response: 00:23:07.868 { 00:23:07.868 "code": -126, 00:23:07.868 "message": "Required key not available" 00:23:07.868 } 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1020676 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020676 ']' 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020676 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020676 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1020676' 00:23:07.868 killing process with pid 1020676 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020676 00:23:07.868 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.868 00:23:07.868 Latency(us) 00:23:07.868 [2024-12-16T15:28:56.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.868 [2024-12-16T15:28:56.477Z] =================================================================================================================== 00:23:07.868 [2024-12-16T15:28:56.477Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020676 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1018702 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018702 ']' 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018702 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.868 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018702 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018702' 00:23:08.127 killing process with pid 1018702 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018702 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018702 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1020761 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1020761 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020761 ']' 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.127 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.127 [2024-12-16 16:28:56.728460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:08.127 [2024-12-16 16:28:56.728509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.386 [2024-12-16 16:28:56.806800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.386 [2024-12-16 16:28:56.827200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.386 [2024-12-16 16:28:56.827238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.386 [2024-12-16 16:28:56.827245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.386 [2024-12-16 16:28:56.827250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.386 [2024-12-16 16:28:56.827255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
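With the application notices done, the target bring-up that follows repeats the usual sequence: transport, subsystem, TLS listener, backing namespace, then key and host registration. Condensed from the rpc.py calls in the trace below (full script paths abbreviated); since the key file is still at mode 0666 from the chmod earlier, the last two steps are expected to fail:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k            # -k marks the listener as TLS
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.YuToDBlboT
# rejected: the trace reports mode 0100666 as invalid; the passing runs use 0600
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# fails with "Internal error": key0 was never added, so the host cannot be bound to it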
00:23:08.386 [2024-12-16 16:28:56.827764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.YuToDBlboT 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YuToDBlboT 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.YuToDBlboT 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YuToDBlboT 00:23:08.386 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.645 [2024-12-16 16:28:57.130045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.645 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.904 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.163 [2024-12-16 16:28:57.515038] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.163 [2024-12-16 16:28:57.515243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.163 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.163 malloc0 00:23:09.163 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.421 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:09.680 [2024-12-16 
16:28:58.068299] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YuToDBlboT': 0100666 00:23:09.680 [2024-12-16 16:28:58.068324] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:09.680 request: 00:23:09.680 { 00:23:09.680 "name": "key0", 00:23:09.680 "path": "/tmp/tmp.YuToDBlboT", 00:23:09.680 "method": "keyring_file_add_key", 00:23:09.680 "req_id": 1 00:23:09.680 } 00:23:09.680 Got JSON-RPC error response 00:23:09.680 response: 00:23:09.680 { 00:23:09.680 "code": -1, 00:23:09.680 "message": "Operation not permitted" 00:23:09.680 } 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:09.680 [2024-12-16 16:28:58.252808] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:09.680 [2024-12-16 16:28:58.252840] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:09.680 request: 00:23:09.680 { 00:23:09.680 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.680 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.680 "psk": "key0", 00:23:09.680 "method": "nvmf_subsystem_add_host", 00:23:09.680 "req_id": 1 00:23:09.680 } 00:23:09.680 Got JSON-RPC error response 00:23:09.680 response: 00:23:09.680 { 00:23:09.680 "code": -32603, 00:23:09.680 "message": "Internal error" 00:23:09.680 } 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1020761 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020761 ']' 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020761 00:23:09.680 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.939 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.939 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020761 00:23:09.939 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:09.939 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.939 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1020761' 00:23:09.939 killing process with pid 1020761 00:23:09.939 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020761 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020761 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.YuToDBlboT 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:09.940 16:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021205 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021205 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021205 ']' 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.940 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.199 [2024-12-16 16:28:58.554510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:10.199 [2024-12-16 16:28:58.554554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.199 [2024-12-16 16:28:58.625573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.199 [2024-12-16 16:28:58.645383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.199 [2024-12-16 16:28:58.645415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.199 [2024-12-16 16:28:58.645421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.199 [2024-12-16 16:28:58.645427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.199 [2024-12-16 16:28:58.645432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.199 [2024-12-16 16:28:58.645937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.YuToDBlboT 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YuToDBlboT 00:23:10.199 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.474 [2024-12-16 16:28:58.943706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.474 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.732 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.732 [2024-12-16 16:28:59.324675] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.732 [2024-12-16 16:28:59.324878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.991 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.991 malloc0 00:23:10.991 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.249 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:11.509 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1021474 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1021474 /var/tmp/bdevperf.sock 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1021474 ']' 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.768 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.768 [2024-12-16 16:29:00.199845] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:11.768 [2024-12-16 16:29:00.199899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021474 ] 00:23:11.768 [2024-12-16 16:29:00.273001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.768 [2024-12-16 16:29:00.295159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.026 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.027 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.027 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:12.027 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.285 [2024-12-16 16:29:00.742316] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.285 TLSTESTn1 00:23:12.285 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:12.545 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:12.545 "subsystems": [ 00:23:12.545 { 00:23:12.545 "subsystem": "keyring", 00:23:12.545 "config": [ 00:23:12.545 { 00:23:12.545 "method": "keyring_file_add_key", 00:23:12.545 "params": { 00:23:12.545 "name": "key0", 00:23:12.545 "path": "/tmp/tmp.YuToDBlboT" 00:23:12.545 } 00:23:12.545 } 00:23:12.545 ] 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "subsystem": "iobuf", 00:23:12.545 "config": [ 00:23:12.545 { 00:23:12.545 "method": "iobuf_set_options", 00:23:12.545 "params": { 00:23:12.545 "small_pool_count": 8192, 00:23:12.545 "large_pool_count": 1024, 00:23:12.545 "small_bufsize": 8192, 00:23:12.545 "large_bufsize": 135168, 00:23:12.545 "enable_numa": false 00:23:12.545 } 00:23:12.545 } 00:23:12.545 ] 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "subsystem": "sock", 00:23:12.545 "config": [ 00:23:12.545 { 00:23:12.545 "method": "sock_set_default_impl", 00:23:12.545 "params": { 00:23:12.545 "impl_name": "posix" 
00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "sock_impl_set_options", 00:23:12.545 "params": { 00:23:12.545 "impl_name": "ssl", 00:23:12.545 "recv_buf_size": 4096, 00:23:12.545 "send_buf_size": 4096, 00:23:12.545 "enable_recv_pipe": true, 00:23:12.545 "enable_quickack": false, 00:23:12.545 "enable_placement_id": 0, 00:23:12.545 "enable_zerocopy_send_server": true, 00:23:12.545 "enable_zerocopy_send_client": false, 00:23:12.545 "zerocopy_threshold": 0, 00:23:12.545 "tls_version": 0, 00:23:12.545 "enable_ktls": false 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "sock_impl_set_options", 00:23:12.545 "params": { 00:23:12.545 "impl_name": "posix", 00:23:12.545 "recv_buf_size": 2097152, 00:23:12.545 "send_buf_size": 2097152, 00:23:12.545 "enable_recv_pipe": true, 00:23:12.545 "enable_quickack": false, 00:23:12.545 "enable_placement_id": 0, 00:23:12.545 "enable_zerocopy_send_server": true, 00:23:12.545 "enable_zerocopy_send_client": false, 00:23:12.545 "zerocopy_threshold": 0, 00:23:12.545 "tls_version": 0, 00:23:12.545 "enable_ktls": false 00:23:12.545 } 00:23:12.545 } 00:23:12.545 ] 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "subsystem": "vmd", 00:23:12.545 "config": [] 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "subsystem": "accel", 00:23:12.545 "config": [ 00:23:12.545 { 00:23:12.545 "method": "accel_set_options", 00:23:12.545 "params": { 00:23:12.545 "small_cache_size": 128, 00:23:12.545 "large_cache_size": 16, 00:23:12.545 "task_count": 2048, 00:23:12.545 "sequence_count": 2048, 00:23:12.545 "buf_count": 2048 00:23:12.545 } 00:23:12.545 } 00:23:12.545 ] 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "subsystem": "bdev", 00:23:12.545 "config": [ 00:23:12.545 { 00:23:12.545 "method": "bdev_set_options", 00:23:12.545 "params": { 00:23:12.545 "bdev_io_pool_size": 65535, 00:23:12.545 "bdev_io_cache_size": 256, 00:23:12.545 "bdev_auto_examine": true, 00:23:12.545 "iobuf_small_cache_size": 128, 00:23:12.545 "iobuf_large_cache_size": 16 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "bdev_raid_set_options", 00:23:12.545 "params": { 00:23:12.545 "process_window_size_kb": 1024, 00:23:12.545 "process_max_bandwidth_mb_sec": 0 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "bdev_iscsi_set_options", 00:23:12.545 "params": { 00:23:12.545 "timeout_sec": 30 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "bdev_nvme_set_options", 00:23:12.545 "params": { 00:23:12.545 "action_on_timeout": "none", 00:23:12.545 "timeout_us": 0, 00:23:12.545 "timeout_admin_us": 0, 00:23:12.545 "keep_alive_timeout_ms": 10000, 00:23:12.545 "arbitration_burst": 0, 00:23:12.545 "low_priority_weight": 0, 00:23:12.545 "medium_priority_weight": 0, 00:23:12.545 "high_priority_weight": 0, 00:23:12.545 "nvme_adminq_poll_period_us": 10000, 00:23:12.545 "nvme_ioq_poll_period_us": 0, 00:23:12.545 "io_queue_requests": 0, 00:23:12.545 "delay_cmd_submit": true, 00:23:12.545 "transport_retry_count": 4, 00:23:12.545 "bdev_retry_count": 3, 00:23:12.545 "transport_ack_timeout": 0, 00:23:12.545 "ctrlr_loss_timeout_sec": 0, 00:23:12.545 "reconnect_delay_sec": 0, 00:23:12.545 "fast_io_fail_timeout_sec": 0, 00:23:12.545 "disable_auto_failback": false, 00:23:12.545 "generate_uuids": false, 00:23:12.545 "transport_tos": 0, 00:23:12.545 "nvme_error_stat": false, 00:23:12.545 "rdma_srq_size": 0, 00:23:12.545 "io_path_stat": false, 00:23:12.545 "allow_accel_sequence": false, 00:23:12.545 "rdma_max_cq_size": 0, 00:23:12.545 
"rdma_cm_event_timeout_ms": 0, 00:23:12.545 "dhchap_digests": [ 00:23:12.545 "sha256", 00:23:12.545 "sha384", 00:23:12.545 "sha512" 00:23:12.545 ], 00:23:12.545 "dhchap_dhgroups": [ 00:23:12.545 "null", 00:23:12.545 "ffdhe2048", 00:23:12.545 "ffdhe3072", 00:23:12.545 "ffdhe4096", 00:23:12.545 "ffdhe6144", 00:23:12.545 "ffdhe8192" 00:23:12.545 ], 00:23:12.545 "rdma_umr_per_io": false 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "bdev_nvme_set_hotplug", 00:23:12.545 "params": { 00:23:12.545 "period_us": 100000, 00:23:12.545 "enable": false 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "bdev_malloc_create", 00:23:12.545 "params": { 00:23:12.545 "name": "malloc0", 00:23:12.545 "num_blocks": 8192, 00:23:12.545 "block_size": 4096, 00:23:12.545 "physical_block_size": 4096, 00:23:12.545 "uuid": "02c1b8d1-1501-4cf9-8c9e-4aa3e39af6f0", 00:23:12.545 "optimal_io_boundary": 0, 00:23:12.545 "md_size": 0, 00:23:12.545 "dif_type": 0, 00:23:12.545 "dif_is_head_of_md": false, 00:23:12.545 "dif_pi_format": 0 00:23:12.545 } 00:23:12.545 }, 00:23:12.545 { 00:23:12.545 "method": "bdev_wait_for_examine" 00:23:12.546 } 00:23:12.546 ] 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "subsystem": "nbd", 00:23:12.546 "config": [] 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "subsystem": "scheduler", 00:23:12.546 "config": [ 00:23:12.546 { 00:23:12.546 "method": "framework_set_scheduler", 00:23:12.546 "params": { 00:23:12.546 "name": "static" 00:23:12.546 } 00:23:12.546 } 00:23:12.546 ] 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "subsystem": "nvmf", 00:23:12.546 "config": [ 00:23:12.546 { 00:23:12.546 "method": "nvmf_set_config", 00:23:12.546 "params": { 00:23:12.546 "discovery_filter": "match_any", 00:23:12.546 "admin_cmd_passthru": { 00:23:12.546 "identify_ctrlr": false 00:23:12.546 }, 00:23:12.546 "dhchap_digests": [ 00:23:12.546 "sha256", 00:23:12.546 "sha384", 00:23:12.546 "sha512" 00:23:12.546 ], 00:23:12.546 "dhchap_dhgroups": [ 00:23:12.546 "null", 00:23:12.546 "ffdhe2048", 00:23:12.546 "ffdhe3072", 00:23:12.546 "ffdhe4096", 00:23:12.546 "ffdhe6144", 00:23:12.546 "ffdhe8192" 00:23:12.546 ] 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_set_max_subsystems", 00:23:12.546 "params": { 00:23:12.546 "max_subsystems": 1024 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_set_crdt", 00:23:12.546 "params": { 00:23:12.546 "crdt1": 0, 00:23:12.546 "crdt2": 0, 00:23:12.546 "crdt3": 0 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_create_transport", 00:23:12.546 "params": { 00:23:12.546 "trtype": "TCP", 00:23:12.546 "max_queue_depth": 128, 00:23:12.546 "max_io_qpairs_per_ctrlr": 127, 00:23:12.546 "in_capsule_data_size": 4096, 00:23:12.546 "max_io_size": 131072, 00:23:12.546 "io_unit_size": 131072, 00:23:12.546 "max_aq_depth": 128, 00:23:12.546 "num_shared_buffers": 511, 00:23:12.546 "buf_cache_size": 4294967295, 00:23:12.546 "dif_insert_or_strip": false, 00:23:12.546 "zcopy": false, 00:23:12.546 "c2h_success": false, 00:23:12.546 "sock_priority": 0, 00:23:12.546 "abort_timeout_sec": 1, 00:23:12.546 "ack_timeout": 0, 00:23:12.546 "data_wr_pool_size": 0 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_create_subsystem", 00:23:12.546 "params": { 00:23:12.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.546 "allow_any_host": false, 00:23:12.546 "serial_number": "SPDK00000000000001", 00:23:12.546 "model_number": "SPDK bdev Controller", 00:23:12.546 "max_namespaces": 10, 
00:23:12.546 "min_cntlid": 1, 00:23:12.546 "max_cntlid": 65519, 00:23:12.546 "ana_reporting": false 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_subsystem_add_host", 00:23:12.546 "params": { 00:23:12.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.546 "host": "nqn.2016-06.io.spdk:host1", 00:23:12.546 "psk": "key0" 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_subsystem_add_ns", 00:23:12.546 "params": { 00:23:12.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.546 "namespace": { 00:23:12.546 "nsid": 1, 00:23:12.546 "bdev_name": "malloc0", 00:23:12.546 "nguid": "02C1B8D115014CF98C9E4AA3E39AF6F0", 00:23:12.546 "uuid": "02c1b8d1-1501-4cf9-8c9e-4aa3e39af6f0", 00:23:12.546 "no_auto_visible": false 00:23:12.546 } 00:23:12.546 } 00:23:12.546 }, 00:23:12.546 { 00:23:12.546 "method": "nvmf_subsystem_add_listener", 00:23:12.546 "params": { 00:23:12.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.546 "listen_address": { 00:23:12.546 "trtype": "TCP", 00:23:12.546 "adrfam": "IPv4", 00:23:12.546 "traddr": "10.0.0.2", 00:23:12.546 "trsvcid": "4420" 00:23:12.546 }, 00:23:12.546 "secure_channel": true 00:23:12.546 } 00:23:12.546 } 00:23:12.546 ] 00:23:12.546 } 00:23:12.546 ] 00:23:12.546 }' 00:23:12.546 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:12.805 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:12.805 "subsystems": [ 00:23:12.805 { 00:23:12.805 "subsystem": "keyring", 00:23:12.805 "config": [ 00:23:12.805 { 00:23:12.805 "method": "keyring_file_add_key", 00:23:12.805 "params": { 00:23:12.805 "name": "key0", 00:23:12.805 "path": "/tmp/tmp.YuToDBlboT" 00:23:12.805 } 00:23:12.805 } 00:23:12.805 ] 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "subsystem": "iobuf", 00:23:12.805 "config": [ 00:23:12.805 { 00:23:12.805 "method": "iobuf_set_options", 00:23:12.805 "params": { 00:23:12.805 "small_pool_count": 8192, 00:23:12.805 "large_pool_count": 1024, 00:23:12.805 "small_bufsize": 8192, 00:23:12.805 "large_bufsize": 135168, 00:23:12.805 "enable_numa": false 00:23:12.805 } 00:23:12.805 } 00:23:12.805 ] 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "subsystem": "sock", 00:23:12.805 "config": [ 00:23:12.805 { 00:23:12.805 "method": "sock_set_default_impl", 00:23:12.805 "params": { 00:23:12.805 "impl_name": "posix" 00:23:12.805 } 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "method": "sock_impl_set_options", 00:23:12.805 "params": { 00:23:12.805 "impl_name": "ssl", 00:23:12.805 "recv_buf_size": 4096, 00:23:12.805 "send_buf_size": 4096, 00:23:12.805 "enable_recv_pipe": true, 00:23:12.805 "enable_quickack": false, 00:23:12.805 "enable_placement_id": 0, 00:23:12.805 "enable_zerocopy_send_server": true, 00:23:12.805 "enable_zerocopy_send_client": false, 00:23:12.805 "zerocopy_threshold": 0, 00:23:12.805 "tls_version": 0, 00:23:12.805 "enable_ktls": false 00:23:12.805 } 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "method": "sock_impl_set_options", 00:23:12.805 "params": { 00:23:12.805 "impl_name": "posix", 00:23:12.805 "recv_buf_size": 2097152, 00:23:12.805 "send_buf_size": 2097152, 00:23:12.805 "enable_recv_pipe": true, 00:23:12.805 "enable_quickack": false, 00:23:12.805 "enable_placement_id": 0, 00:23:12.805 "enable_zerocopy_send_server": true, 00:23:12.805 "enable_zerocopy_send_client": false, 00:23:12.805 "zerocopy_threshold": 0, 00:23:12.805 "tls_version": 0, 00:23:12.805 
"enable_ktls": false 00:23:12.805 } 00:23:12.805 } 00:23:12.805 ] 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "subsystem": "vmd", 00:23:12.805 "config": [] 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "subsystem": "accel", 00:23:12.805 "config": [ 00:23:12.805 { 00:23:12.805 "method": "accel_set_options", 00:23:12.805 "params": { 00:23:12.805 "small_cache_size": 128, 00:23:12.805 "large_cache_size": 16, 00:23:12.805 "task_count": 2048, 00:23:12.805 "sequence_count": 2048, 00:23:12.805 "buf_count": 2048 00:23:12.805 } 00:23:12.805 } 00:23:12.805 ] 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "subsystem": "bdev", 00:23:12.805 "config": [ 00:23:12.805 { 00:23:12.805 "method": "bdev_set_options", 00:23:12.805 "params": { 00:23:12.805 "bdev_io_pool_size": 65535, 00:23:12.805 "bdev_io_cache_size": 256, 00:23:12.805 "bdev_auto_examine": true, 00:23:12.805 "iobuf_small_cache_size": 128, 00:23:12.805 "iobuf_large_cache_size": 16 00:23:12.805 } 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "method": "bdev_raid_set_options", 00:23:12.805 "params": { 00:23:12.805 "process_window_size_kb": 1024, 00:23:12.805 "process_max_bandwidth_mb_sec": 0 00:23:12.805 } 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "method": "bdev_iscsi_set_options", 00:23:12.805 "params": { 00:23:12.805 "timeout_sec": 30 00:23:12.805 } 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "method": "bdev_nvme_set_options", 00:23:12.805 "params": { 00:23:12.805 "action_on_timeout": "none", 00:23:12.805 "timeout_us": 0, 00:23:12.805 "timeout_admin_us": 0, 00:23:12.805 "keep_alive_timeout_ms": 10000, 00:23:12.805 "arbitration_burst": 0, 00:23:12.805 "low_priority_weight": 0, 00:23:12.805 "medium_priority_weight": 0, 00:23:12.805 "high_priority_weight": 0, 00:23:12.805 "nvme_adminq_poll_period_us": 10000, 00:23:12.805 "nvme_ioq_poll_period_us": 0, 00:23:12.805 "io_queue_requests": 512, 00:23:12.805 "delay_cmd_submit": true, 00:23:12.805 "transport_retry_count": 4, 00:23:12.805 "bdev_retry_count": 3, 00:23:12.805 "transport_ack_timeout": 0, 00:23:12.805 "ctrlr_loss_timeout_sec": 0, 00:23:12.805 "reconnect_delay_sec": 0, 00:23:12.805 "fast_io_fail_timeout_sec": 0, 00:23:12.805 "disable_auto_failback": false, 00:23:12.805 "generate_uuids": false, 00:23:12.805 "transport_tos": 0, 00:23:12.805 "nvme_error_stat": false, 00:23:12.805 "rdma_srq_size": 0, 00:23:12.805 "io_path_stat": false, 00:23:12.805 "allow_accel_sequence": false, 00:23:12.805 "rdma_max_cq_size": 0, 00:23:12.805 "rdma_cm_event_timeout_ms": 0, 00:23:12.805 "dhchap_digests": [ 00:23:12.805 "sha256", 00:23:12.805 "sha384", 00:23:12.805 "sha512" 00:23:12.805 ], 00:23:12.805 "dhchap_dhgroups": [ 00:23:12.805 "null", 00:23:12.805 "ffdhe2048", 00:23:12.805 "ffdhe3072", 00:23:12.805 "ffdhe4096", 00:23:12.805 "ffdhe6144", 00:23:12.805 "ffdhe8192" 00:23:12.805 ], 00:23:12.805 "rdma_umr_per_io": false 00:23:12.805 } 00:23:12.805 }, 00:23:12.805 { 00:23:12.805 "method": "bdev_nvme_attach_controller", 00:23:12.805 "params": { 00:23:12.805 "name": "TLSTEST", 00:23:12.805 "trtype": "TCP", 00:23:12.805 "adrfam": "IPv4", 00:23:12.805 "traddr": "10.0.0.2", 00:23:12.805 "trsvcid": "4420", 00:23:12.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.806 "prchk_reftag": false, 00:23:12.806 "prchk_guard": false, 00:23:12.806 "ctrlr_loss_timeout_sec": 0, 00:23:12.806 "reconnect_delay_sec": 0, 00:23:12.806 "fast_io_fail_timeout_sec": 0, 00:23:12.806 "psk": "key0", 00:23:12.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.806 "hdgst": false, 00:23:12.806 "ddgst": false, 00:23:12.806 "multipath": "multipath" 
00:23:12.806 } 00:23:12.806 }, 00:23:12.806 { 00:23:12.806 "method": "bdev_nvme_set_hotplug", 00:23:12.806 "params": { 00:23:12.806 "period_us": 100000, 00:23:12.806 "enable": false 00:23:12.806 } 00:23:12.806 }, 00:23:12.806 { 00:23:12.806 "method": "bdev_wait_for_examine" 00:23:12.806 } 00:23:12.806 ] 00:23:12.806 }, 00:23:12.806 { 00:23:12.806 "subsystem": "nbd", 00:23:12.806 "config": [] 00:23:12.806 } 00:23:12.806 ] 00:23:12.806 }' 00:23:12.806 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1021474 00:23:12.806 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021474 ']' 00:23:12.806 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021474 00:23:12.806 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:12.806 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.806 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021474 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021474' 00:23:13.065 killing process with pid 1021474 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021474 00:23:13.065 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.065 00:23:13.065 Latency(us) 00:23:13.065 [2024-12-16T15:29:01.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.065 [2024-12-16T15:29:01.674Z] =================================================================================================================== 00:23:13.065 [2024-12-16T15:29:01.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021474 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1021205 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021205 ']' 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021205 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.065 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.066 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021205 00:23:13.066 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.066 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.066 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021205' 00:23:13.066 killing process with pid 1021205 00:23:13.066 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021205 00:23:13.066 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 1021205 00:23:13.326 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:13.326 "subsystems": [ 00:23:13.326 { 00:23:13.326 "subsystem": "keyring", 00:23:13.326 "config": [ 00:23:13.326 { 00:23:13.326 "method": "keyring_file_add_key", 00:23:13.326 "params": { 00:23:13.326 "name": "key0", 00:23:13.326 "path": "/tmp/tmp.YuToDBlboT" 00:23:13.326 } 00:23:13.326 } 00:23:13.326 ] 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "subsystem": "iobuf", 00:23:13.326 "config": [ 00:23:13.326 { 00:23:13.326 "method": "iobuf_set_options", 00:23:13.326 "params": { 00:23:13.326 "small_pool_count": 8192, 00:23:13.326 "large_pool_count": 1024, 00:23:13.326 "small_bufsize": 8192, 00:23:13.326 "large_bufsize": 135168, 00:23:13.326 "enable_numa": false 00:23:13.326 } 00:23:13.326 } 00:23:13.326 ] 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "subsystem": "sock", 00:23:13.326 "config": [ 00:23:13.326 { 00:23:13.326 "method": "sock_set_default_impl", 00:23:13.326 "params": { 00:23:13.326 "impl_name": "posix" 00:23:13.326 } 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "method": "sock_impl_set_options", 00:23:13.326 "params": { 00:23:13.326 "impl_name": "ssl", 00:23:13.326 "recv_buf_size": 4096, 00:23:13.326 "send_buf_size": 4096, 00:23:13.326 "enable_recv_pipe": true, 00:23:13.326 "enable_quickack": false, 00:23:13.326 "enable_placement_id": 0, 00:23:13.326 "enable_zerocopy_send_server": true, 00:23:13.326 "enable_zerocopy_send_client": false, 00:23:13.326 "zerocopy_threshold": 0, 00:23:13.326 "tls_version": 0, 00:23:13.326 "enable_ktls": false 00:23:13.326 } 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "method": "sock_impl_set_options", 00:23:13.326 "params": { 00:23:13.326 "impl_name": "posix", 00:23:13.326 "recv_buf_size": 2097152, 00:23:13.326 "send_buf_size": 2097152, 00:23:13.326 "enable_recv_pipe": true, 00:23:13.326 "enable_quickack": false, 00:23:13.326 "enable_placement_id": 0, 00:23:13.326 "enable_zerocopy_send_server": true, 00:23:13.326 "enable_zerocopy_send_client": false, 00:23:13.326 "zerocopy_threshold": 0, 00:23:13.326 "tls_version": 0, 00:23:13.326 "enable_ktls": false 00:23:13.326 } 00:23:13.326 } 00:23:13.326 ] 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "subsystem": "vmd", 00:23:13.326 "config": [] 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "subsystem": "accel", 00:23:13.326 "config": [ 00:23:13.326 { 00:23:13.326 "method": "accel_set_options", 00:23:13.326 "params": { 00:23:13.326 "small_cache_size": 128, 00:23:13.326 "large_cache_size": 16, 00:23:13.326 "task_count": 2048, 00:23:13.326 "sequence_count": 2048, 00:23:13.326 "buf_count": 2048 00:23:13.326 } 00:23:13.326 } 00:23:13.326 ] 00:23:13.326 }, 00:23:13.326 { 00:23:13.326 "subsystem": "bdev", 00:23:13.326 "config": [ 00:23:13.326 { 00:23:13.326 "method": "bdev_set_options", 00:23:13.326 "params": { 00:23:13.326 "bdev_io_pool_size": 65535, 00:23:13.326 "bdev_io_cache_size": 256, 00:23:13.327 "bdev_auto_examine": true, 00:23:13.327 "iobuf_small_cache_size": 128, 00:23:13.327 "iobuf_large_cache_size": 16 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "bdev_raid_set_options", 00:23:13.327 "params": { 00:23:13.327 "process_window_size_kb": 1024, 00:23:13.327 "process_max_bandwidth_mb_sec": 0 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "bdev_iscsi_set_options", 00:23:13.327 "params": { 00:23:13.327 "timeout_sec": 30 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "bdev_nvme_set_options", 00:23:13.327 "params": { 00:23:13.327 
"action_on_timeout": "none", 00:23:13.327 "timeout_us": 0, 00:23:13.327 "timeout_admin_us": 0, 00:23:13.327 "keep_alive_timeout_ms": 10000, 00:23:13.327 "arbitration_burst": 0, 00:23:13.327 "low_priority_weight": 0, 00:23:13.327 "medium_priority_weight": 0, 00:23:13.327 "high_priority_weight": 0, 00:23:13.327 "nvme_adminq_poll_period_us": 10000, 00:23:13.327 "nvme_ioq_poll_period_us": 0, 00:23:13.327 "io_queue_requests": 0, 00:23:13.327 "delay_cmd_submit": true, 00:23:13.327 "transport_retry_count": 4, 00:23:13.327 "bdev_retry_count": 3, 00:23:13.327 "transport_ack_timeout": 0, 00:23:13.327 "ctrlr_loss_timeout_sec": 0, 00:23:13.327 "reconnect_delay_sec": 0, 00:23:13.327 "fast_io_fail_timeout_sec": 0, 00:23:13.327 "disable_auto_failback": false, 00:23:13.327 "generate_uuids": false, 00:23:13.327 "transport_tos": 0, 00:23:13.327 "nvme_error_stat": false, 00:23:13.327 "rdma_srq_size": 0, 00:23:13.327 "io_path_stat": false, 00:23:13.327 "allow_accel_sequence": false, 00:23:13.327 "rdma_max_cq_size": 0, 00:23:13.327 "rdma_cm_event_timeout_ms": 0, 00:23:13.327 "dhchap_digests": [ 00:23:13.327 "sha256", 00:23:13.327 "sha384", 00:23:13.327 "sha512" 00:23:13.327 ], 00:23:13.327 "dhchap_dhgroups": [ 00:23:13.327 "null", 00:23:13.327 "ffdhe2048", 00:23:13.327 "ffdhe3072", 00:23:13.327 "ffdhe4096", 00:23:13.327 "ffdhe6144", 00:23:13.327 "ffdhe8192" 00:23:13.327 ], 00:23:13.327 "rdma_umr_per_io": false 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "bdev_nvme_set_hotplug", 00:23:13.327 "params": { 00:23:13.327 "period_us": 100000, 00:23:13.327 "enable": false 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "bdev_malloc_create", 00:23:13.327 "params": { 00:23:13.327 "name": "malloc0", 00:23:13.327 "num_blocks": 8192, 00:23:13.327 "block_size": 4096, 00:23:13.327 "physical_block_size": 4096, 00:23:13.327 "uuid": "02c1b8d1-1501-4cf9-8c9e-4aa3e39af6f0", 00:23:13.327 "optimal_io_boundary": 0, 00:23:13.327 "md_size": 0, 00:23:13.327 "dif_type": 0, 00:23:13.327 "dif_is_head_of_md": false, 00:23:13.327 "dif_pi_format": 0 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "bdev_wait_for_examine" 00:23:13.327 } 00:23:13.327 ] 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "subsystem": "nbd", 00:23:13.327 "config": [] 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "subsystem": "scheduler", 00:23:13.327 "config": [ 00:23:13.327 { 00:23:13.327 "method": "framework_set_scheduler", 00:23:13.327 "params": { 00:23:13.327 "name": "static" 00:23:13.327 } 00:23:13.327 } 00:23:13.327 ] 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "subsystem": "nvmf", 00:23:13.327 "config": [ 00:23:13.327 { 00:23:13.327 "method": "nvmf_set_config", 00:23:13.327 "params": { 00:23:13.327 "discovery_filter": "match_any", 00:23:13.327 "admin_cmd_passthru": { 00:23:13.327 "identify_ctrlr": false 00:23:13.327 }, 00:23:13.327 "dhchap_digests": [ 00:23:13.327 "sha256", 00:23:13.327 "sha384", 00:23:13.327 "sha512" 00:23:13.327 ], 00:23:13.327 "dhchap_dhgroups": [ 00:23:13.327 "null", 00:23:13.327 "ffdhe2048", 00:23:13.327 "ffdhe3072", 00:23:13.327 "ffdhe4096", 00:23:13.327 "ffdhe6144", 00:23:13.327 "ffdhe8192" 00:23:13.327 ] 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "nvmf_set_max_subsystems", 00:23:13.327 "params": { 00:23:13.327 "max_subsystems": 1024 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "nvmf_set_crdt", 00:23:13.327 "params": { 00:23:13.327 "crdt1": 0, 00:23:13.327 "crdt2": 0, 00:23:13.327 "crdt3": 0 00:23:13.327 } 00:23:13.327 }, 
00:23:13.327 { 00:23:13.327 "method": "nvmf_create_transport", 00:23:13.327 "params": { 00:23:13.327 "trtype": "TCP", 00:23:13.327 "max_queue_depth": 128, 00:23:13.327 "max_io_qpairs_per_ctrlr": 127, 00:23:13.327 "in_capsule_data_size": 4096, 00:23:13.327 "max_io_size": 131072, 00:23:13.327 "io_unit_size": 131072, 00:23:13.327 "max_aq_depth": 128, 00:23:13.327 "num_shared_buffers": 511, 00:23:13.327 "buf_cache_size": 4294967295, 00:23:13.327 "dif_insert_or_strip": false, 00:23:13.327 "zcopy": false, 00:23:13.327 "c2h_success": false, 00:23:13.327 "sock_priority": 0, 00:23:13.327 "abort_timeout_sec": 1, 00:23:13.327 "ack_timeout": 0, 00:23:13.327 "data_wr_pool_size": 0 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "nvmf_create_subsystem", 00:23:13.327 "params": { 00:23:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.327 "allow_any_host": false, 00:23:13.327 "serial_number": "SPDK00000000000001", 00:23:13.327 "model_number": "SPDK bdev Controller", 00:23:13.327 "max_namespaces": 10, 00:23:13.327 "min_cntlid": 1, 00:23:13.327 "max_cntlid": 65519, 00:23:13.327 "ana_reporting": false 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "nvmf_subsystem_add_host", 00:23:13.327 "params": { 00:23:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.327 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.327 "psk": "key0" 00:23:13.327 } 00:23:13.327 }, 00:23:13.327 { 00:23:13.327 "method": "nvmf_subsystem_add_ns", 00:23:13.327 "params": { 00:23:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.327 "namespace": { 00:23:13.327 "nsid": 1, 00:23:13.327 "bdev_name": "malloc0", 00:23:13.327 "nguid": "02C1B8D115014CF98C9E4AA3E39AF6F0", 00:23:13.327 "uuid": "02c1b8d1-1501-4cf9-8c9e-4aa3e39af6f0", 00:23:13.327 "no_auto_visible": false 00:23:13.327 } 00:23:13.327 } 00:23:13.328 }, 00:23:13.328 { 00:23:13.328 "method": "nvmf_subsystem_add_listener", 00:23:13.328 "params": { 00:23:13.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.328 "listen_address": { 00:23:13.328 "trtype": "TCP", 00:23:13.328 "adrfam": "IPv4", 00:23:13.328 "traddr": "10.0.0.2", 00:23:13.328 "trsvcid": "4420" 00:23:13.328 }, 00:23:13.328 "secure_channel": true 00:23:13.328 } 00:23:13.328 } 00:23:13.328 ] 00:23:13.328 } 00:23:13.328 ] 00:23:13.328 }' 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021719 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021719 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021719 ']' 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.328 16:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.328 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.328 [2024-12-16 16:29:01.832413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:13.328 [2024-12-16 16:29:01.832463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.328 [2024-12-16 16:29:01.909383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.328 [2024-12-16 16:29:01.928558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.328 [2024-12-16 16:29:01.928593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.328 [2024-12-16 16:29:01.928601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.328 [2024-12-16 16:29:01.928607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.328 [2024-12-16 16:29:01.928612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.328 [2024-12-16 16:29:01.929146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.587 [2024-12-16 16:29:02.136832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.587 [2024-12-16 16:29:02.168856] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.587 [2024-12-16 16:29:02.169051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.155 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.155 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1021959 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1021959 /var/tmp/bdevperf.sock 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021959 ']' 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 
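Note how both restarted processes receive the JSON captured earlier by save_config through a file-descriptor path rather than a file on disk: -c /dev/fd/62 for nvmf_tgt above and -c /dev/fd/63 for the bdevperf command just traced. Those paths are what bash assigns to process substitution. A minimal sketch of the same pattern, with the flags taken from the traced command line and the config abbreviated:

    # Replay a saved configuration into a fresh bdevperf without a temp file.
    conf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$conf")
    # bash exposes the <(...) stream as /dev/fd/N -- the /dev/fd/63 seen above.
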
00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.156 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:14.156 "subsystems": [ 00:23:14.156 { 00:23:14.156 "subsystem": "keyring", 00:23:14.156 "config": [ 00:23:14.156 { 00:23:14.156 "method": "keyring_file_add_key", 00:23:14.156 "params": { 00:23:14.156 "name": "key0", 00:23:14.156 "path": "/tmp/tmp.YuToDBlboT" 00:23:14.156 } 00:23:14.156 } 00:23:14.156 ] 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "subsystem": "iobuf", 00:23:14.156 "config": [ 00:23:14.156 { 00:23:14.156 "method": "iobuf_set_options", 00:23:14.156 "params": { 00:23:14.156 "small_pool_count": 8192, 00:23:14.156 "large_pool_count": 1024, 00:23:14.156 "small_bufsize": 8192, 00:23:14.156 "large_bufsize": 135168, 00:23:14.156 "enable_numa": false 00:23:14.156 } 00:23:14.156 } 00:23:14.156 ] 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "subsystem": "sock", 00:23:14.156 "config": [ 00:23:14.156 { 00:23:14.156 "method": "sock_set_default_impl", 00:23:14.156 "params": { 00:23:14.156 "impl_name": "posix" 00:23:14.156 } 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "method": "sock_impl_set_options", 00:23:14.156 "params": { 00:23:14.156 "impl_name": "ssl", 00:23:14.156 "recv_buf_size": 4096, 00:23:14.156 "send_buf_size": 4096, 00:23:14.156 "enable_recv_pipe": true, 00:23:14.156 "enable_quickack": false, 00:23:14.156 "enable_placement_id": 0, 00:23:14.156 "enable_zerocopy_send_server": true, 00:23:14.156 "enable_zerocopy_send_client": false, 00:23:14.156 "zerocopy_threshold": 0, 00:23:14.156 "tls_version": 0, 00:23:14.156 "enable_ktls": false 00:23:14.156 } 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "method": "sock_impl_set_options", 00:23:14.156 "params": { 00:23:14.156 "impl_name": "posix", 00:23:14.156 "recv_buf_size": 2097152, 00:23:14.156 "send_buf_size": 2097152, 00:23:14.156 "enable_recv_pipe": true, 00:23:14.156 "enable_quickack": false, 00:23:14.156 "enable_placement_id": 0, 00:23:14.156 "enable_zerocopy_send_server": true, 00:23:14.156 "enable_zerocopy_send_client": false, 00:23:14.156 "zerocopy_threshold": 0, 00:23:14.156 "tls_version": 0, 00:23:14.156 "enable_ktls": false 00:23:14.156 } 00:23:14.156 } 00:23:14.156 ] 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "subsystem": "vmd", 00:23:14.156 "config": [] 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "subsystem": "accel", 00:23:14.156 "config": [ 00:23:14.156 { 00:23:14.156 "method": "accel_set_options", 00:23:14.156 "params": { 00:23:14.156 "small_cache_size": 128, 00:23:14.156 "large_cache_size": 16, 00:23:14.156 "task_count": 2048, 00:23:14.156 "sequence_count": 2048, 00:23:14.156 "buf_count": 2048 00:23:14.156 } 00:23:14.156 } 00:23:14.156 ] 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "subsystem": "bdev", 00:23:14.156 "config": [ 00:23:14.156 { 00:23:14.156 "method": "bdev_set_options", 00:23:14.156 "params": { 00:23:14.156 "bdev_io_pool_size": 65535, 00:23:14.156 "bdev_io_cache_size": 256, 00:23:14.156 "bdev_auto_examine": true, 00:23:14.156 "iobuf_small_cache_size": 128, 00:23:14.156 "iobuf_large_cache_size": 16 00:23:14.156 } 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "method": "bdev_raid_set_options", 00:23:14.156 
"params": { 00:23:14.156 "process_window_size_kb": 1024, 00:23:14.156 "process_max_bandwidth_mb_sec": 0 00:23:14.156 } 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "method": "bdev_iscsi_set_options", 00:23:14.156 "params": { 00:23:14.156 "timeout_sec": 30 00:23:14.156 } 00:23:14.156 }, 00:23:14.156 { 00:23:14.156 "method": "bdev_nvme_set_options", 00:23:14.156 "params": { 00:23:14.156 "action_on_timeout": "none", 00:23:14.156 "timeout_us": 0, 00:23:14.156 "timeout_admin_us": 0, 00:23:14.156 "keep_alive_timeout_ms": 10000, 00:23:14.156 "arbitration_burst": 0, 00:23:14.156 "low_priority_weight": 0, 00:23:14.156 "medium_priority_weight": 0, 00:23:14.156 "high_priority_weight": 0, 00:23:14.156 "nvme_adminq_poll_period_us": 10000, 00:23:14.156 "nvme_ioq_poll_period_us": 0, 00:23:14.156 "io_queue_requests": 512, 00:23:14.156 "delay_cmd_submit": true, 00:23:14.156 "transport_retry_count": 4, 00:23:14.156 "bdev_retry_count": 3, 00:23:14.156 "transport_ack_timeout": 0, 00:23:14.156 "ctrlr_loss_timeout_sec": 0, 00:23:14.156 "reconnect_delay_sec": 0, 00:23:14.156 "fast_io_fail_timeout_sec": 0, 00:23:14.156 "disable_auto_failback": false, 00:23:14.156 "generate_uuids": false, 00:23:14.156 "transport_tos": 0, 00:23:14.156 "nvme_error_stat": false, 00:23:14.156 "rdma_srq_size": 0, 00:23:14.156 "io_path_stat": false, 00:23:14.156 "allow_accel_sequence": false, 00:23:14.156 "rdma_max_cq_size": 0, 00:23:14.156 "rdma_cm_event_timeout_ms": 0, 00:23:14.156 "dhchap_digests": [ 00:23:14.156 "sha256", 00:23:14.156 "sha384", 00:23:14.156 "sha512" 00:23:14.156 ], 00:23:14.156 "dhchap_dhgroups": [ 00:23:14.156 "null", 00:23:14.156 "ffdhe2048", 00:23:14.156 "ffdhe3072", 00:23:14.156 "ffdhe4096", 00:23:14.156 "ffdhe6144", 00:23:14.156 "ffdhe8192" 00:23:14.156 ], 00:23:14.156 "rdma_umr_per_io": false 00:23:14.156 } 00:23:14.157 }, 00:23:14.157 { 00:23:14.157 "method": "bdev_nvme_attach_controller", 00:23:14.157 "params": { 00:23:14.157 "name": "TLSTEST", 00:23:14.157 "trtype": "TCP", 00:23:14.157 "adrfam": "IPv4", 00:23:14.157 "traddr": "10.0.0.2", 00:23:14.157 "trsvcid": "4420", 00:23:14.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.157 "prchk_reftag": false, 00:23:14.157 "prchk_guard": false, 00:23:14.157 "ctrlr_loss_timeout_sec": 0, 00:23:14.157 "reconnect_delay_sec": 0, 00:23:14.157 "fast_io_fail_timeout_sec": 0, 00:23:14.157 "psk": "key0", 00:23:14.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.157 "hdgst": false, 00:23:14.157 "ddgst": false, 00:23:14.157 "multipath": "multipath" 00:23:14.157 } 00:23:14.157 }, 00:23:14.157 { 00:23:14.157 "method": "bdev_nvme_set_hotplug", 00:23:14.157 "params": { 00:23:14.157 "period_us": 100000, 00:23:14.157 "enable": false 00:23:14.157 } 00:23:14.157 }, 00:23:14.157 { 00:23:14.157 "method": "bdev_wait_for_examine" 00:23:14.157 } 00:23:14.157 ] 00:23:14.157 }, 00:23:14.157 { 00:23:14.157 "subsystem": "nbd", 00:23:14.157 "config": [] 00:23:14.157 } 00:23:14.157 ] 00:23:14.157 }' 00:23:14.157 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.157 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.157 [2024-12-16 16:29:02.761654] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:14.157 [2024-12-16 16:29:02.761697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021959 ] 00:23:14.416 [2024-12-16 16:29:02.837428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.416 [2024-12-16 16:29:02.860013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.416 [2024-12-16 16:29:03.007657] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.354 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.354 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.354 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:15.354 Running I/O for 10 seconds... 00:23:17.227 5431.00 IOPS, 21.21 MiB/s [2024-12-16T15:29:06.859Z] 5543.00 IOPS, 21.65 MiB/s [2024-12-16T15:29:07.868Z] 5529.00 IOPS, 21.60 MiB/s [2024-12-16T15:29:08.805Z] 5532.50 IOPS, 21.61 MiB/s [2024-12-16T15:29:09.742Z] 5541.60 IOPS, 21.65 MiB/s [2024-12-16T15:29:11.121Z] 5554.67 IOPS, 21.70 MiB/s [2024-12-16T15:29:12.059Z] 5549.00 IOPS, 21.68 MiB/s [2024-12-16T15:29:12.996Z] 5542.12 IOPS, 21.65 MiB/s [2024-12-16T15:29:13.933Z] 5526.11 IOPS, 21.59 MiB/s [2024-12-16T15:29:13.934Z] 5536.40 IOPS, 21.63 MiB/s 00:23:25.325 Latency(us) 00:23:25.325 [2024-12-16T15:29:13.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.325 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:25.325 Verification LBA range: start 0x0 length 0x2000 00:23:25.325 TLSTESTn1 : 10.01 5541.87 21.65 0.00 0.00 23063.46 5430.13 40195.41 00:23:25.325 [2024-12-16T15:29:13.934Z] =================================================================================================================== 00:23:25.325 [2024-12-16T15:29:13.934Z] Total : 5541.87 21.65 0.00 0.00 23063.46 5430.13 40195.41 00:23:25.325 { 00:23:25.325 "results": [ 00:23:25.325 { 00:23:25.325 "job": "TLSTESTn1", 00:23:25.325 "core_mask": "0x4", 00:23:25.325 "workload": "verify", 00:23:25.325 "status": "finished", 00:23:25.325 "verify_range": { 00:23:25.325 "start": 0, 00:23:25.325 "length": 8192 00:23:25.325 }, 00:23:25.325 "queue_depth": 128, 00:23:25.325 "io_size": 4096, 00:23:25.325 "runtime": 10.013051, 00:23:25.325 "iops": 5541.867308975056, 00:23:25.325 "mibps": 21.647919175683814, 00:23:25.325 "io_failed": 0, 00:23:25.325 "io_timeout": 0, 00:23:25.325 "avg_latency_us": 23063.457120305222, 00:23:25.325 "min_latency_us": 5430.125714285714, 00:23:25.325 "max_latency_us": 40195.41333333333 00:23:25.325 } 00:23:25.325 ], 00:23:25.325 "core_count": 1 00:23:25.325 } 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1021959 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021959 ']' 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021959 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021959 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021959' 00:23:25.325 killing process with pid 1021959 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021959 00:23:25.325 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.325 00:23:25.325 Latency(us) 00:23:25.325 [2024-12-16T15:29:13.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.325 [2024-12-16T15:29:13.934Z] =================================================================================================================== 00:23:25.325 [2024-12-16T15:29:13.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.325 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021959 00:23:25.585 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1021719 00:23:25.585 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021719 ']' 00:23:25.585 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021719 00:23:25.585 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:25.585 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.585 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021719 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021719' 00:23:25.585 killing process with pid 1021719 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021719 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021719 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1023759 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1023759 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1023759 ']' 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.585 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.845 [2024-12-16 16:29:14.223600] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:25.845 [2024-12-16 16:29:14.223648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.845 [2024-12-16 16:29:14.294512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.845 [2024-12-16 16:29:14.323016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.845 [2024-12-16 16:29:14.323060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.845 [2024-12-16 16:29:14.323072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.845 [2024-12-16 16:29:14.323081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.845 [2024-12-16 16:29:14.323111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
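Once this instance is up, the trace below shows setup_nvmf_tgt repeating the RPC sequence this whole file exercises: a TCP transport, a malloc-backed subsystem, and a TLS-enabled listener gated on a file-based PSK. Condensed into plain rpc.py calls (the rpc shorthand variable is ours; every command and argument is lifted from the traced lines):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.YuToDBlboT   # register the PSK file as "key0"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
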
00:23:25.845 [2024-12-16 16:29:14.323783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.845 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.845 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.845 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.845 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.845 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.104 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.104 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.YuToDBlboT 00:23:26.104 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YuToDBlboT 00:23:26.104 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.104 [2024-12-16 16:29:14.647015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.104 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.363 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:26.623 [2024-12-16 16:29:15.040004] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.623 [2024-12-16 16:29:15.040213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.623 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.882 malloc0 00:23:26.882 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.140 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:27.140 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1024016 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1024016 /var/tmp/bdevperf.sock 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1024016 ']' 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.399 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.399 [2024-12-16 16:29:15.927847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:27.399 [2024-12-16 16:29:15.927897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024016 ] 00:23:27.399 [2024-12-16 16:29:16.002256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.659 [2024-12-16 16:29:16.024295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.659 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.659 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.659 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:27.918 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:27.918 [2024-12-16 16:29:16.478900] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.177 nvme0n1 00:23:28.177 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.177 Running I/O for 1 seconds... 
00:23:29.114 5155.00 IOPS, 20.14 MiB/s 00:23:29.114 Latency(us) 00:23:29.114 [2024-12-16T15:29:17.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.114 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:29.114 Verification LBA range: start 0x0 length 0x2000 00:23:29.114 nvme0n1 : 1.02 5198.00 20.30 0.00 0.00 24447.96 4868.39 32955.25 00:23:29.114 [2024-12-16T15:29:17.723Z] =================================================================================================================== 00:23:29.114 [2024-12-16T15:29:17.723Z] Total : 5198.00 20.30 0.00 0.00 24447.96 4868.39 32955.25 00:23:29.114 { 00:23:29.114 "results": [ 00:23:29.114 { 00:23:29.114 "job": "nvme0n1", 00:23:29.114 "core_mask": "0x2", 00:23:29.114 "workload": "verify", 00:23:29.114 "status": "finished", 00:23:29.114 "verify_range": { 00:23:29.114 "start": 0, 00:23:29.114 "length": 8192 00:23:29.114 }, 00:23:29.114 "queue_depth": 128, 00:23:29.114 "io_size": 4096, 00:23:29.114 "runtime": 1.016353, 00:23:29.114 "iops": 5197.997152564119, 00:23:29.114 "mibps": 20.304676377203588, 00:23:29.114 "io_failed": 0, 00:23:29.114 "io_timeout": 0, 00:23:29.114 "avg_latency_us": 24447.964879983414, 00:23:29.114 "min_latency_us": 4868.388571428572, 00:23:29.114 "max_latency_us": 32955.24571428572 00:23:29.114 } 00:23:29.114 ], 00:23:29.114 "core_count": 1 00:23:29.114 } 00:23:29.114 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1024016 00:23:29.114 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024016 ']' 00:23:29.114 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024016 00:23:29.114 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.114 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.114 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024016 00:23:29.373 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:29.373 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:29.373 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024016' 00:23:29.373 killing process with pid 1024016 00:23:29.373 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024016 00:23:29.373 Received shutdown signal, test time was about 1.000000 seconds 00:23:29.373 00:23:29.373 Latency(us) 00:23:29.373 [2024-12-16T15:29:17.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.373 [2024-12-16T15:29:17.983Z] =================================================================================================================== 00:23:29.374 [2024-12-16T15:29:17.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024016 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1023759 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1023759 ']' 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1023759 00:23:29.374 16:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1023759 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1023759' 00:23:29.374 killing process with pid 1023759 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1023759 00:23:29.374 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1023759 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024404 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024404 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024404 ']' 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.633 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.633 [2024-12-16 16:29:18.181043] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:29.633 [2024-12-16 16:29:18.181101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.893 [2024-12-16 16:29:18.261357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.893 [2024-12-16 16:29:18.280930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.893 [2024-12-16 16:29:18.280967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
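The teardown between the two benchmark passes follows the killprocess trace above: check that the pid is non-empty and still alive, confirm the process name, then kill and reap it. A minimal sketch of that helper, reconstructed from the traced lines (the real autotest_common.sh version also special-cases sudo-wrapped processes):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1             # process must still exist
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                            # reap it so the exit status is observed
  }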
00:23:29.893 [2024-12-16 16:29:18.280974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.893 [2024-12-16 16:29:18.280979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.893 [2024-12-16 16:29:18.280984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.893 [2024-12-16 16:29:18.281517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.893 [2024-12-16 16:29:18.423318] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.893 malloc0 00:23:29.893 [2024-12-16 16:29:18.451260] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.893 [2024-12-16 16:29:18.451448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1024486 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1024486 /var/tmp/bdevperf.sock 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024486 ']' 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.893 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.153 [2024-12-16 16:29:18.526962] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:30.153 [2024-12-16 16:29:18.527001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024486 ] 00:23:30.153 [2024-12-16 16:29:18.603722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.153 [2024-12-16 16:29:18.626171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.153 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.153 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.153 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YuToDBlboT 00:23:30.412 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:30.672 [2024-12-16 16:29:19.065723] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.672 nvme0n1 00:23:30.672 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.672 Running I/O for 1 seconds... 00:23:32.050 5399.00 IOPS, 21.09 MiB/s 00:23:32.050 Latency(us) 00:23:32.050 [2024-12-16T15:29:20.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.050 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.050 Verification LBA range: start 0x0 length 0x2000 00:23:32.050 nvme0n1 : 1.02 5440.55 21.25 0.00 0.00 23358.80 6740.85 26089.57 00:23:32.050 [2024-12-16T15:29:20.659Z] =================================================================================================================== 00:23:32.050 [2024-12-16T15:29:20.659Z] Total : 5440.55 21.25 0.00 0.00 23358.80 6740.85 26089.57 00:23:32.050 { 00:23:32.050 "results": [ 00:23:32.050 { 00:23:32.050 "job": "nvme0n1", 00:23:32.050 "core_mask": "0x2", 00:23:32.050 "workload": "verify", 00:23:32.050 "status": "finished", 00:23:32.050 "verify_range": { 00:23:32.050 "start": 0, 00:23:32.050 "length": 8192 00:23:32.050 }, 00:23:32.050 "queue_depth": 128, 00:23:32.050 "io_size": 4096, 00:23:32.050 "runtime": 1.01589, 00:23:32.050 "iops": 5440.549665810275, 00:23:32.051 "mibps": 21.252147132071386, 00:23:32.051 "io_failed": 0, 00:23:32.051 "io_timeout": 0, 00:23:32.051 "avg_latency_us": 23358.79557565889, 00:23:32.051 "min_latency_us": 6740.845714285714, 00:23:32.051 "max_latency_us": 26089.569523809525 00:23:32.051 } 00:23:32.051 ], 00:23:32.051 "core_count": 1 00:23:32.051 } 00:23:32.051 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:32.051 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.051 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.051 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.051 16:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:32.051 "subsystems": [ 00:23:32.051 { 00:23:32.051 "subsystem": "keyring", 00:23:32.051 "config": [ 00:23:32.051 { 00:23:32.051 "method": "keyring_file_add_key", 00:23:32.051 "params": { 00:23:32.051 "name": "key0", 00:23:32.051 "path": "/tmp/tmp.YuToDBlboT" 00:23:32.051 } 00:23:32.051 } 00:23:32.051 ] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "iobuf", 00:23:32.051 "config": [ 00:23:32.051 { 00:23:32.051 "method": "iobuf_set_options", 00:23:32.051 "params": { 00:23:32.051 "small_pool_count": 8192, 00:23:32.051 "large_pool_count": 1024, 00:23:32.051 "small_bufsize": 8192, 00:23:32.051 "large_bufsize": 135168, 00:23:32.051 "enable_numa": false 00:23:32.051 } 00:23:32.051 } 00:23:32.051 ] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "sock", 00:23:32.051 "config": [ 00:23:32.051 { 00:23:32.051 "method": "sock_set_default_impl", 00:23:32.051 "params": { 00:23:32.051 "impl_name": "posix" 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "sock_impl_set_options", 00:23:32.051 "params": { 00:23:32.051 "impl_name": "ssl", 00:23:32.051 "recv_buf_size": 4096, 00:23:32.051 "send_buf_size": 4096, 00:23:32.051 "enable_recv_pipe": true, 00:23:32.051 "enable_quickack": false, 00:23:32.051 "enable_placement_id": 0, 00:23:32.051 "enable_zerocopy_send_server": true, 00:23:32.051 "enable_zerocopy_send_client": false, 00:23:32.051 "zerocopy_threshold": 0, 00:23:32.051 "tls_version": 0, 00:23:32.051 "enable_ktls": false 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "sock_impl_set_options", 00:23:32.051 "params": { 00:23:32.051 "impl_name": "posix", 00:23:32.051 "recv_buf_size": 2097152, 00:23:32.051 "send_buf_size": 2097152, 00:23:32.051 "enable_recv_pipe": true, 00:23:32.051 "enable_quickack": false, 00:23:32.051 "enable_placement_id": 0, 00:23:32.051 "enable_zerocopy_send_server": true, 00:23:32.051 "enable_zerocopy_send_client": false, 00:23:32.051 "zerocopy_threshold": 0, 00:23:32.051 "tls_version": 0, 00:23:32.051 "enable_ktls": false 00:23:32.051 } 00:23:32.051 } 00:23:32.051 ] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "vmd", 00:23:32.051 "config": [] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "accel", 00:23:32.051 "config": [ 00:23:32.051 { 00:23:32.051 "method": "accel_set_options", 00:23:32.051 "params": { 00:23:32.051 "small_cache_size": 128, 00:23:32.051 "large_cache_size": 16, 00:23:32.051 "task_count": 2048, 00:23:32.051 "sequence_count": 2048, 00:23:32.051 "buf_count": 2048 00:23:32.051 } 00:23:32.051 } 00:23:32.051 ] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "bdev", 00:23:32.051 "config": [ 00:23:32.051 { 00:23:32.051 "method": "bdev_set_options", 00:23:32.051 "params": { 00:23:32.051 "bdev_io_pool_size": 65535, 00:23:32.051 "bdev_io_cache_size": 256, 00:23:32.051 "bdev_auto_examine": true, 00:23:32.051 "iobuf_small_cache_size": 128, 00:23:32.051 "iobuf_large_cache_size": 16 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "bdev_raid_set_options", 00:23:32.051 "params": { 00:23:32.051 "process_window_size_kb": 1024, 00:23:32.051 "process_max_bandwidth_mb_sec": 0 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "bdev_iscsi_set_options", 00:23:32.051 "params": { 00:23:32.051 "timeout_sec": 30 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "bdev_nvme_set_options", 00:23:32.051 "params": { 00:23:32.051 "action_on_timeout": "none", 00:23:32.051 
"timeout_us": 0, 00:23:32.051 "timeout_admin_us": 0, 00:23:32.051 "keep_alive_timeout_ms": 10000, 00:23:32.051 "arbitration_burst": 0, 00:23:32.051 "low_priority_weight": 0, 00:23:32.051 "medium_priority_weight": 0, 00:23:32.051 "high_priority_weight": 0, 00:23:32.051 "nvme_adminq_poll_period_us": 10000, 00:23:32.051 "nvme_ioq_poll_period_us": 0, 00:23:32.051 "io_queue_requests": 0, 00:23:32.051 "delay_cmd_submit": true, 00:23:32.051 "transport_retry_count": 4, 00:23:32.051 "bdev_retry_count": 3, 00:23:32.051 "transport_ack_timeout": 0, 00:23:32.051 "ctrlr_loss_timeout_sec": 0, 00:23:32.051 "reconnect_delay_sec": 0, 00:23:32.051 "fast_io_fail_timeout_sec": 0, 00:23:32.051 "disable_auto_failback": false, 00:23:32.051 "generate_uuids": false, 00:23:32.051 "transport_tos": 0, 00:23:32.051 "nvme_error_stat": false, 00:23:32.051 "rdma_srq_size": 0, 00:23:32.051 "io_path_stat": false, 00:23:32.051 "allow_accel_sequence": false, 00:23:32.051 "rdma_max_cq_size": 0, 00:23:32.051 "rdma_cm_event_timeout_ms": 0, 00:23:32.051 "dhchap_digests": [ 00:23:32.051 "sha256", 00:23:32.051 "sha384", 00:23:32.051 "sha512" 00:23:32.051 ], 00:23:32.051 "dhchap_dhgroups": [ 00:23:32.051 "null", 00:23:32.051 "ffdhe2048", 00:23:32.051 "ffdhe3072", 00:23:32.051 "ffdhe4096", 00:23:32.051 "ffdhe6144", 00:23:32.051 "ffdhe8192" 00:23:32.051 ], 00:23:32.051 "rdma_umr_per_io": false 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "bdev_nvme_set_hotplug", 00:23:32.051 "params": { 00:23:32.051 "period_us": 100000, 00:23:32.051 "enable": false 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "bdev_malloc_create", 00:23:32.051 "params": { 00:23:32.051 "name": "malloc0", 00:23:32.051 "num_blocks": 8192, 00:23:32.051 "block_size": 4096, 00:23:32.051 "physical_block_size": 4096, 00:23:32.051 "uuid": "7dc560ba-dc57-4962-9acb-08b2e4a65b8a", 00:23:32.051 "optimal_io_boundary": 0, 00:23:32.051 "md_size": 0, 00:23:32.051 "dif_type": 0, 00:23:32.051 "dif_is_head_of_md": false, 00:23:32.051 "dif_pi_format": 0 00:23:32.051 } 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "method": "bdev_wait_for_examine" 00:23:32.051 } 00:23:32.051 ] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "nbd", 00:23:32.051 "config": [] 00:23:32.051 }, 00:23:32.051 { 00:23:32.051 "subsystem": "scheduler", 00:23:32.051 "config": [ 00:23:32.051 { 00:23:32.051 "method": "framework_set_scheduler", 00:23:32.051 "params": { 00:23:32.051 "name": "static" 00:23:32.051 } 00:23:32.052 } 00:23:32.052 ] 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "subsystem": "nvmf", 00:23:32.052 "config": [ 00:23:32.052 { 00:23:32.052 "method": "nvmf_set_config", 00:23:32.052 "params": { 00:23:32.052 "discovery_filter": "match_any", 00:23:32.052 "admin_cmd_passthru": { 00:23:32.052 "identify_ctrlr": false 00:23:32.052 }, 00:23:32.052 "dhchap_digests": [ 00:23:32.052 "sha256", 00:23:32.052 "sha384", 00:23:32.052 "sha512" 00:23:32.052 ], 00:23:32.052 "dhchap_dhgroups": [ 00:23:32.052 "null", 00:23:32.052 "ffdhe2048", 00:23:32.052 "ffdhe3072", 00:23:32.052 "ffdhe4096", 00:23:32.052 "ffdhe6144", 00:23:32.052 "ffdhe8192" 00:23:32.052 ] 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": "nvmf_set_max_subsystems", 00:23:32.052 "params": { 00:23:32.052 "max_subsystems": 1024 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": "nvmf_set_crdt", 00:23:32.052 "params": { 00:23:32.052 "crdt1": 0, 00:23:32.052 "crdt2": 0, 00:23:32.052 "crdt3": 0 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": 
"nvmf_create_transport", 00:23:32.052 "params": { 00:23:32.052 "trtype": "TCP", 00:23:32.052 "max_queue_depth": 128, 00:23:32.052 "max_io_qpairs_per_ctrlr": 127, 00:23:32.052 "in_capsule_data_size": 4096, 00:23:32.052 "max_io_size": 131072, 00:23:32.052 "io_unit_size": 131072, 00:23:32.052 "max_aq_depth": 128, 00:23:32.052 "num_shared_buffers": 511, 00:23:32.052 "buf_cache_size": 4294967295, 00:23:32.052 "dif_insert_or_strip": false, 00:23:32.052 "zcopy": false, 00:23:32.052 "c2h_success": false, 00:23:32.052 "sock_priority": 0, 00:23:32.052 "abort_timeout_sec": 1, 00:23:32.052 "ack_timeout": 0, 00:23:32.052 "data_wr_pool_size": 0 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": "nvmf_create_subsystem", 00:23:32.052 "params": { 00:23:32.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.052 "allow_any_host": false, 00:23:32.052 "serial_number": "00000000000000000000", 00:23:32.052 "model_number": "SPDK bdev Controller", 00:23:32.052 "max_namespaces": 32, 00:23:32.052 "min_cntlid": 1, 00:23:32.052 "max_cntlid": 65519, 00:23:32.052 "ana_reporting": false 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": "nvmf_subsystem_add_host", 00:23:32.052 "params": { 00:23:32.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.052 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.052 "psk": "key0" 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": "nvmf_subsystem_add_ns", 00:23:32.052 "params": { 00:23:32.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.052 "namespace": { 00:23:32.052 "nsid": 1, 00:23:32.052 "bdev_name": "malloc0", 00:23:32.052 "nguid": "7DC560BADC5749629ACB08B2E4A65B8A", 00:23:32.052 "uuid": "7dc560ba-dc57-4962-9acb-08b2e4a65b8a", 00:23:32.052 "no_auto_visible": false 00:23:32.052 } 00:23:32.052 } 00:23:32.052 }, 00:23:32.052 { 00:23:32.052 "method": "nvmf_subsystem_add_listener", 00:23:32.052 "params": { 00:23:32.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.052 "listen_address": { 00:23:32.052 "trtype": "TCP", 00:23:32.052 "adrfam": "IPv4", 00:23:32.052 "traddr": "10.0.0.2", 00:23:32.052 "trsvcid": "4420" 00:23:32.052 }, 00:23:32.052 "secure_channel": false, 00:23:32.052 "sock_impl": "ssl" 00:23:32.052 } 00:23:32.052 } 00:23:32.052 ] 00:23:32.052 } 00:23:32.052 ] 00:23:32.052 }' 00:23:32.052 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:32.312 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:32.312 "subsystems": [ 00:23:32.312 { 00:23:32.312 "subsystem": "keyring", 00:23:32.312 "config": [ 00:23:32.312 { 00:23:32.312 "method": "keyring_file_add_key", 00:23:32.312 "params": { 00:23:32.312 "name": "key0", 00:23:32.312 "path": "/tmp/tmp.YuToDBlboT" 00:23:32.312 } 00:23:32.312 } 00:23:32.312 ] 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "subsystem": "iobuf", 00:23:32.312 "config": [ 00:23:32.312 { 00:23:32.312 "method": "iobuf_set_options", 00:23:32.312 "params": { 00:23:32.312 "small_pool_count": 8192, 00:23:32.312 "large_pool_count": 1024, 00:23:32.312 "small_bufsize": 8192, 00:23:32.312 "large_bufsize": 135168, 00:23:32.312 "enable_numa": false 00:23:32.312 } 00:23:32.312 } 00:23:32.312 ] 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "subsystem": "sock", 00:23:32.312 "config": [ 00:23:32.312 { 00:23:32.312 "method": "sock_set_default_impl", 00:23:32.312 "params": { 00:23:32.312 "impl_name": "posix" 00:23:32.312 } 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 
"method": "sock_impl_set_options", 00:23:32.312 "params": { 00:23:32.312 "impl_name": "ssl", 00:23:32.312 "recv_buf_size": 4096, 00:23:32.312 "send_buf_size": 4096, 00:23:32.312 "enable_recv_pipe": true, 00:23:32.312 "enable_quickack": false, 00:23:32.312 "enable_placement_id": 0, 00:23:32.312 "enable_zerocopy_send_server": true, 00:23:32.312 "enable_zerocopy_send_client": false, 00:23:32.312 "zerocopy_threshold": 0, 00:23:32.312 "tls_version": 0, 00:23:32.312 "enable_ktls": false 00:23:32.312 } 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "method": "sock_impl_set_options", 00:23:32.312 "params": { 00:23:32.312 "impl_name": "posix", 00:23:32.312 "recv_buf_size": 2097152, 00:23:32.312 "send_buf_size": 2097152, 00:23:32.312 "enable_recv_pipe": true, 00:23:32.312 "enable_quickack": false, 00:23:32.312 "enable_placement_id": 0, 00:23:32.312 "enable_zerocopy_send_server": true, 00:23:32.312 "enable_zerocopy_send_client": false, 00:23:32.312 "zerocopy_threshold": 0, 00:23:32.312 "tls_version": 0, 00:23:32.312 "enable_ktls": false 00:23:32.312 } 00:23:32.312 } 00:23:32.312 ] 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "subsystem": "vmd", 00:23:32.312 "config": [] 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "subsystem": "accel", 00:23:32.312 "config": [ 00:23:32.312 { 00:23:32.312 "method": "accel_set_options", 00:23:32.312 "params": { 00:23:32.312 "small_cache_size": 128, 00:23:32.312 "large_cache_size": 16, 00:23:32.312 "task_count": 2048, 00:23:32.312 "sequence_count": 2048, 00:23:32.312 "buf_count": 2048 00:23:32.312 } 00:23:32.312 } 00:23:32.312 ] 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "subsystem": "bdev", 00:23:32.312 "config": [ 00:23:32.312 { 00:23:32.312 "method": "bdev_set_options", 00:23:32.312 "params": { 00:23:32.312 "bdev_io_pool_size": 65535, 00:23:32.312 "bdev_io_cache_size": 256, 00:23:32.312 "bdev_auto_examine": true, 00:23:32.312 "iobuf_small_cache_size": 128, 00:23:32.312 "iobuf_large_cache_size": 16 00:23:32.312 } 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "method": "bdev_raid_set_options", 00:23:32.312 "params": { 00:23:32.312 "process_window_size_kb": 1024, 00:23:32.312 "process_max_bandwidth_mb_sec": 0 00:23:32.312 } 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "method": "bdev_iscsi_set_options", 00:23:32.312 "params": { 00:23:32.312 "timeout_sec": 30 00:23:32.312 } 00:23:32.312 }, 00:23:32.312 { 00:23:32.312 "method": "bdev_nvme_set_options", 00:23:32.312 "params": { 00:23:32.312 "action_on_timeout": "none", 00:23:32.313 "timeout_us": 0, 00:23:32.313 "timeout_admin_us": 0, 00:23:32.313 "keep_alive_timeout_ms": 10000, 00:23:32.313 "arbitration_burst": 0, 00:23:32.313 "low_priority_weight": 0, 00:23:32.313 "medium_priority_weight": 0, 00:23:32.313 "high_priority_weight": 0, 00:23:32.313 "nvme_adminq_poll_period_us": 10000, 00:23:32.313 "nvme_ioq_poll_period_us": 0, 00:23:32.313 "io_queue_requests": 512, 00:23:32.313 "delay_cmd_submit": true, 00:23:32.313 "transport_retry_count": 4, 00:23:32.313 "bdev_retry_count": 3, 00:23:32.313 "transport_ack_timeout": 0, 00:23:32.313 "ctrlr_loss_timeout_sec": 0, 00:23:32.313 "reconnect_delay_sec": 0, 00:23:32.313 "fast_io_fail_timeout_sec": 0, 00:23:32.313 "disable_auto_failback": false, 00:23:32.313 "generate_uuids": false, 00:23:32.313 "transport_tos": 0, 00:23:32.313 "nvme_error_stat": false, 00:23:32.313 "rdma_srq_size": 0, 00:23:32.313 "io_path_stat": false, 00:23:32.313 "allow_accel_sequence": false, 00:23:32.313 "rdma_max_cq_size": 0, 00:23:32.313 "rdma_cm_event_timeout_ms": 0, 00:23:32.313 "dhchap_digests": [ 00:23:32.313 
"sha256", 00:23:32.313 "sha384", 00:23:32.313 "sha512" 00:23:32.313 ], 00:23:32.313 "dhchap_dhgroups": [ 00:23:32.313 "null", 00:23:32.313 "ffdhe2048", 00:23:32.313 "ffdhe3072", 00:23:32.313 "ffdhe4096", 00:23:32.313 "ffdhe6144", 00:23:32.313 "ffdhe8192" 00:23:32.313 ], 00:23:32.313 "rdma_umr_per_io": false 00:23:32.313 } 00:23:32.313 }, 00:23:32.313 { 00:23:32.313 "method": "bdev_nvme_attach_controller", 00:23:32.313 "params": { 00:23:32.313 "name": "nvme0", 00:23:32.313 "trtype": "TCP", 00:23:32.313 "adrfam": "IPv4", 00:23:32.313 "traddr": "10.0.0.2", 00:23:32.313 "trsvcid": "4420", 00:23:32.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.313 "prchk_reftag": false, 00:23:32.313 "prchk_guard": false, 00:23:32.313 "ctrlr_loss_timeout_sec": 0, 00:23:32.313 "reconnect_delay_sec": 0, 00:23:32.313 "fast_io_fail_timeout_sec": 0, 00:23:32.313 "psk": "key0", 00:23:32.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.313 "hdgst": false, 00:23:32.313 "ddgst": false, 00:23:32.313 "multipath": "multipath" 00:23:32.313 } 00:23:32.313 }, 00:23:32.313 { 00:23:32.313 "method": "bdev_nvme_set_hotplug", 00:23:32.313 "params": { 00:23:32.313 "period_us": 100000, 00:23:32.313 "enable": false 00:23:32.313 } 00:23:32.313 }, 00:23:32.313 { 00:23:32.313 "method": "bdev_enable_histogram", 00:23:32.313 "params": { 00:23:32.313 "name": "nvme0n1", 00:23:32.313 "enable": true 00:23:32.313 } 00:23:32.313 }, 00:23:32.313 { 00:23:32.313 "method": "bdev_wait_for_examine" 00:23:32.313 } 00:23:32.313 ] 00:23:32.313 }, 00:23:32.313 { 00:23:32.313 "subsystem": "nbd", 00:23:32.313 "config": [] 00:23:32.313 } 00:23:32.313 ] 00:23:32.313 }' 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1024486 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024486 ']' 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024486 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024486 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024486' 00:23:32.313 killing process with pid 1024486 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024486 00:23:32.313 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.313 00:23:32.313 Latency(us) 00:23:32.313 [2024-12-16T15:29:20.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.313 [2024-12-16T15:29:20.922Z] =================================================================================================================== 00:23:32.313 [2024-12-16T15:29:20.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024486 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1024404 00:23:32.313 16:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024404 ']' 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024404 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.313 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024404 00:23:32.573 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.573 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.573 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024404' 00:23:32.573 killing process with pid 1024404 00:23:32.573 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024404 00:23:32.573 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024404 00:23:32.573 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:32.573 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.573 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:32.573 "subsystems": [ 00:23:32.573 { 00:23:32.573 "subsystem": "keyring", 00:23:32.573 "config": [ 00:23:32.573 { 00:23:32.573 "method": "keyring_file_add_key", 00:23:32.573 "params": { 00:23:32.573 "name": "key0", 00:23:32.573 "path": "/tmp/tmp.YuToDBlboT" 00:23:32.573 } 00:23:32.573 } 00:23:32.573 ] 00:23:32.573 }, 00:23:32.573 { 00:23:32.573 "subsystem": "iobuf", 00:23:32.573 "config": [ 00:23:32.573 { 00:23:32.573 "method": "iobuf_set_options", 00:23:32.573 "params": { 00:23:32.573 "small_pool_count": 8192, 00:23:32.573 "large_pool_count": 1024, 00:23:32.573 "small_bufsize": 8192, 00:23:32.573 "large_bufsize": 135168, 00:23:32.573 "enable_numa": false 00:23:32.573 } 00:23:32.573 } 00:23:32.573 ] 00:23:32.573 }, 00:23:32.573 { 00:23:32.573 "subsystem": "sock", 00:23:32.573 "config": [ 00:23:32.573 { 00:23:32.573 "method": "sock_set_default_impl", 00:23:32.573 "params": { 00:23:32.573 "impl_name": "posix" 00:23:32.573 } 00:23:32.573 }, 00:23:32.573 { 00:23:32.573 "method": "sock_impl_set_options", 00:23:32.573 "params": { 00:23:32.573 "impl_name": "ssl", 00:23:32.573 "recv_buf_size": 4096, 00:23:32.573 "send_buf_size": 4096, 00:23:32.573 "enable_recv_pipe": true, 00:23:32.573 "enable_quickack": false, 00:23:32.573 "enable_placement_id": 0, 00:23:32.573 "enable_zerocopy_send_server": true, 00:23:32.573 "enable_zerocopy_send_client": false, 00:23:32.573 "zerocopy_threshold": 0, 00:23:32.573 "tls_version": 0, 00:23:32.573 "enable_ktls": false 00:23:32.573 } 00:23:32.573 }, 00:23:32.573 { 00:23:32.573 "method": "sock_impl_set_options", 00:23:32.573 "params": { 00:23:32.573 "impl_name": "posix", 00:23:32.573 "recv_buf_size": 2097152, 00:23:32.573 "send_buf_size": 2097152, 00:23:32.573 "enable_recv_pipe": true, 00:23:32.573 "enable_quickack": false, 00:23:32.573 "enable_placement_id": 0, 00:23:32.573 "enable_zerocopy_send_server": true, 00:23:32.573 "enable_zerocopy_send_client": false, 00:23:32.573 "zerocopy_threshold": 0, 00:23:32.573 "tls_version": 0, 00:23:32.573 "enable_ktls": false 00:23:32.573 } 
00:23:32.573 } 00:23:32.573 ] 00:23:32.573 }, 00:23:32.573 { 00:23:32.573 "subsystem": "vmd", 00:23:32.574 "config": [] 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "subsystem": "accel", 00:23:32.574 "config": [ 00:23:32.574 { 00:23:32.574 "method": "accel_set_options", 00:23:32.574 "params": { 00:23:32.574 "small_cache_size": 128, 00:23:32.574 "large_cache_size": 16, 00:23:32.574 "task_count": 2048, 00:23:32.574 "sequence_count": 2048, 00:23:32.574 "buf_count": 2048 00:23:32.574 } 00:23:32.574 } 00:23:32.574 ] 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "subsystem": "bdev", 00:23:32.574 "config": [ 00:23:32.574 { 00:23:32.574 "method": "bdev_set_options", 00:23:32.574 "params": { 00:23:32.574 "bdev_io_pool_size": 65535, 00:23:32.574 "bdev_io_cache_size": 256, 00:23:32.574 "bdev_auto_examine": true, 00:23:32.574 "iobuf_small_cache_size": 128, 00:23:32.574 "iobuf_large_cache_size": 16 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "bdev_raid_set_options", 00:23:32.574 "params": { 00:23:32.574 "process_window_size_kb": 1024, 00:23:32.574 "process_max_bandwidth_mb_sec": 0 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "bdev_iscsi_set_options", 00:23:32.574 "params": { 00:23:32.574 "timeout_sec": 30 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "bdev_nvme_set_options", 00:23:32.574 "params": { 00:23:32.574 "action_on_timeout": "none", 00:23:32.574 "timeout_us": 0, 00:23:32.574 "timeout_admin_us": 0, 00:23:32.574 "keep_alive_timeout_ms": 10000, 00:23:32.574 "arbitration_burst": 0, 00:23:32.574 "low_priority_weight": 0, 00:23:32.574 "medium_priority_weight": 0, 00:23:32.574 "high_priority_weight": 0, 00:23:32.574 "nvme_adminq_poll_period_us": 10000, 00:23:32.574 "nvme_ioq_poll_period_us": 0, 00:23:32.574 "io_queue_requests": 0, 00:23:32.574 "delay_cmd_submit": true, 00:23:32.574 "transport_retry_count": 4, 00:23:32.574 "bdev_retry_count": 3, 00:23:32.574 "transport_ack_timeout": 0, 00:23:32.574 "ctrlr_loss_timeout_sec": 0, 00:23:32.574 "reconnect_delay_sec": 0, 00:23:32.574 "fast_io_fail_timeout_sec": 0, 00:23:32.574 "disable_auto_failback": false, 00:23:32.574 "generate_uuids": false, 00:23:32.574 "transport_tos": 0, 00:23:32.574 "nvme_error_stat": false, 00:23:32.574 "rdma_srq_size": 0, 00:23:32.574 "io_path_stat": false, 00:23:32.574 "allow_accel_sequence": false, 00:23:32.574 "rdma_max_cq_size": 0, 00:23:32.574 "rdma_cm_event_timeout_ms": 0, 00:23:32.574 "dhchap_digests": [ 00:23:32.574 "sha256", 00:23:32.574 "sha384", 00:23:32.574 "sha512" 00:23:32.574 ], 00:23:32.574 "dhchap_dhgroups": [ 00:23:32.574 "null", 00:23:32.574 "ffdhe2048", 00:23:32.574 "ffdhe3072", 00:23:32.574 "ffdhe4096", 00:23:32.574 "ffdhe6144", 00:23:32.574 "ffdhe8192" 00:23:32.574 ], 00:23:32.574 "rdma_umr_per_io": false 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "bdev_nvme_set_hotplug", 00:23:32.574 "params": { 00:23:32.574 "period_us": 100000, 00:23:32.574 "enable": false 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "bdev_malloc_create", 00:23:32.574 "params": { 00:23:32.574 "name": "malloc0", 00:23:32.574 "num_blocks": 8192, 00:23:32.574 "block_size": 4096, 00:23:32.574 "physical_block_size": 4096, 00:23:32.574 "uuid": "7dc560ba-dc57-4962-9acb-08b2e4a65b8a", 00:23:32.574 "optimal_io_boundary": 0, 00:23:32.574 "md_size": 0, 00:23:32.574 "dif_type": 0, 00:23:32.574 "dif_is_head_of_md": false, 00:23:32.574 "dif_pi_format": 0 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": 
"bdev_wait_for_examine" 00:23:32.574 } 00:23:32.574 ] 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "subsystem": "nbd", 00:23:32.574 "config": [] 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "subsystem": "scheduler", 00:23:32.574 "config": [ 00:23:32.574 { 00:23:32.574 "method": "framework_set_scheduler", 00:23:32.574 "params": { 00:23:32.574 "name": "static" 00:23:32.574 } 00:23:32.574 } 00:23:32.574 ] 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "subsystem": "nvmf", 00:23:32.574 "config": [ 00:23:32.574 { 00:23:32.574 "method": "nvmf_set_config", 00:23:32.574 "params": { 00:23:32.574 "discovery_filter": "match_any", 00:23:32.574 "admin_cmd_passthru": { 00:23:32.574 "identify_ctrlr": false 00:23:32.574 }, 00:23:32.574 "dhchap_digests": [ 00:23:32.574 "sha256", 00:23:32.574 "sha384", 00:23:32.574 "sha512" 00:23:32.574 ], 00:23:32.574 "dhchap_dhgroups": [ 00:23:32.574 "null", 00:23:32.574 "ffdhe2048", 00:23:32.574 "ffdhe3072", 00:23:32.574 "ffdhe4096", 00:23:32.574 "ffdhe6144", 00:23:32.574 "ffdhe8192" 00:23:32.574 ] 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_set_max_subsystems", 00:23:32.574 "params": { 00:23:32.574 "max_subsystems": 1024 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_set_crdt", 00:23:32.574 "params": { 00:23:32.574 "crdt1": 0, 00:23:32.574 "crdt2": 0, 00:23:32.574 "crdt3": 0 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_create_transport", 00:23:32.574 "params": { 00:23:32.574 "trtype": "TCP", 00:23:32.574 "max_queue_depth": 128, 00:23:32.574 "max_io_qpairs_per_ctrlr": 127, 00:23:32.574 "in_capsule_data_size": 4096, 00:23:32.574 "max_io_size": 131072, 00:23:32.574 "io_unit_size": 131072, 00:23:32.574 "max_aq_depth": 128, 00:23:32.574 "num_shared_buffers": 511, 00:23:32.574 "buf_cache_size": 4294967295, 00:23:32.574 "dif_insert_or_strip": false, 00:23:32.574 "zcopy": false, 00:23:32.574 "c2h_success": false, 00:23:32.574 "sock_priority": 0, 00:23:32.574 "abort_timeout_sec": 1, 00:23:32.574 "ack_timeout": 0, 00:23:32.574 "data_wr_pool_size": 0 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_create_subsystem", 00:23:32.574 "params": { 00:23:32.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.574 "allow_any_host": false, 00:23:32.574 "serial_number": "00000000000000000000", 00:23:32.574 "model_number": "SPDK bdev Controller", 00:23:32.574 "max_namespaces": 32, 00:23:32.574 "min_cntlid": 1, 00:23:32.574 "max_cntlid": 65519, 00:23:32.574 "ana_reporting": false 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_subsystem_add_host", 00:23:32.574 "params": { 00:23:32.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.574 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.574 "psk": "key0" 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_subsystem_add_ns", 00:23:32.574 "params": { 00:23:32.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.574 "namespace": { 00:23:32.574 "nsid": 1, 00:23:32.574 "bdev_name": "malloc0", 00:23:32.574 "nguid": "7DC560BADC5749629ACB08B2E4A65B8A", 00:23:32.574 "uuid": "7dc560ba-dc57-4962-9acb-08b2e4a65b8a", 00:23:32.574 "no_auto_visible": false 00:23:32.574 } 00:23:32.574 } 00:23:32.574 }, 00:23:32.574 { 00:23:32.574 "method": "nvmf_subsystem_add_listener", 00:23:32.574 "params": { 00:23:32.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.574 "listen_address": { 00:23:32.574 "trtype": "TCP", 00:23:32.574 "adrfam": "IPv4", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "trsvcid": "4420" 00:23:32.575 
}, 00:23:32.575 "secure_channel": false, 00:23:32.575 "sock_impl": "ssl" 00:23:32.575 } 00:23:32.575 } 00:23:32.575 ] 00:23:32.575 } 00:23:32.575 ] 00:23:32.575 }' 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024948 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024948 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024948 ']' 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.575 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.575 [2024-12-16 16:29:21.145208] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:32.575 [2024-12-16 16:29:21.145258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.834 [2024-12-16 16:29:21.224609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.834 [2024-12-16 16:29:21.244789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.834 [2024-12-16 16:29:21.244826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.834 [2024-12-16 16:29:21.244833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.834 [2024-12-16 16:29:21.244839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.834 [2024-12-16 16:29:21.244843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.834 [2024-12-16 16:29:21.245384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.094 [2024-12-16 16:29:21.453707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.094 [2024-12-16 16:29:21.485743] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.094 [2024-12-16 16:29:21.485936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.662 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.663 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.663 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.663 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.663 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1024982 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1024982 /var/tmp/bdevperf.sock 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024982 ']' 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
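The same replay trick is applied to the initiator: the configuration saved from the first bdevperf (the bperfcfg block) is echoed below and fed to a new bdevperf over fd 63 via -c /dev/fd/63, so the TLS key registration and the bdev_nvme_attach_controller call happen during startup rather than over the RPC socket afterwards. A sketch of the launch, assuming bperfcfg holds the JSON shown below:

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 63<<< "$bperfcfg" &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # harness helper: block until the socket is up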
00:23:33.663 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:33.663 "subsystems": [ 00:23:33.663 { 00:23:33.663 "subsystem": "keyring", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "keyring_file_add_key", 00:23:33.663 "params": { 00:23:33.663 "name": "key0", 00:23:33.663 "path": "/tmp/tmp.YuToDBlboT" 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "iobuf", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "iobuf_set_options", 00:23:33.663 "params": { 00:23:33.663 "small_pool_count": 8192, 00:23:33.663 "large_pool_count": 1024, 00:23:33.663 "small_bufsize": 8192, 00:23:33.663 "large_bufsize": 135168, 00:23:33.663 "enable_numa": false 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "sock", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "sock_set_default_impl", 00:23:33.663 "params": { 00:23:33.663 "impl_name": "posix" 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "method": "sock_impl_set_options", 00:23:33.663 "params": { 00:23:33.663 "impl_name": "ssl", 00:23:33.663 "recv_buf_size": 4096, 00:23:33.663 "send_buf_size": 4096, 00:23:33.663 "enable_recv_pipe": true, 00:23:33.663 "enable_quickack": false, 00:23:33.663 "enable_placement_id": 0, 00:23:33.663 "enable_zerocopy_send_server": true, 00:23:33.663 "enable_zerocopy_send_client": false, 00:23:33.663 "zerocopy_threshold": 0, 00:23:33.663 "tls_version": 0, 00:23:33.663 "enable_ktls": false 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "method": "sock_impl_set_options", 00:23:33.663 "params": { 00:23:33.663 "impl_name": "posix", 00:23:33.663 "recv_buf_size": 2097152, 00:23:33.663 "send_buf_size": 2097152, 00:23:33.663 "enable_recv_pipe": true, 00:23:33.663 "enable_quickack": false, 00:23:33.663 "enable_placement_id": 0, 00:23:33.663 "enable_zerocopy_send_server": true, 00:23:33.663 "enable_zerocopy_send_client": false, 00:23:33.663 "zerocopy_threshold": 0, 00:23:33.663 "tls_version": 0, 00:23:33.663 "enable_ktls": false 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "vmd", 00:23:33.663 "config": [] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "accel", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "accel_set_options", 00:23:33.663 "params": { 00:23:33.663 "small_cache_size": 128, 00:23:33.663 "large_cache_size": 16, 00:23:33.663 "task_count": 2048, 00:23:33.663 "sequence_count": 2048, 00:23:33.663 "buf_count": 2048 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "bdev", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "bdev_set_options", 00:23:33.663 "params": { 00:23:33.663 "bdev_io_pool_size": 65535, 00:23:33.663 "bdev_io_cache_size": 256, 00:23:33.663 "bdev_auto_examine": true, 00:23:33.663 "iobuf_small_cache_size": 128, 00:23:33.663 "iobuf_large_cache_size": 16 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "method": "bdev_raid_set_options", 00:23:33.663 "params": { 00:23:33.663 "process_window_size_kb": 1024, 00:23:33.663 "process_max_bandwidth_mb_sec": 0 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "method": "bdev_iscsi_set_options", 00:23:33.663 "params": { 00:23:33.663 "timeout_sec": 30 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "method": "bdev_nvme_set_options", 00:23:33.663 "params": { 00:23:33.663 "action_on_timeout": "none", 
00:23:33.663 "timeout_us": 0, 00:23:33.663 "timeout_admin_us": 0, 00:23:33.663 "keep_alive_timeout_ms": 10000, 00:23:33.663 "arbitration_burst": 0, 00:23:33.663 "low_priority_weight": 0, 00:23:33.663 "medium_priority_weight": 0, 00:23:33.663 "high_priority_weight": 0, 00:23:33.663 "nvme_adminq_poll_period_us": 10000, 00:23:33.663 "nvme_ioq_poll_period_us": 0, 00:23:33.663 "io_queue_requests": 512, 00:23:33.663 "delay_cmd_submit": true, 00:23:33.663 "transport_retry_count": 4, 00:23:33.663 "bdev_retry_count": 3, 00:23:33.663 "transport_ack_timeout": 0, 00:23:33.663 "ctrlr_loss_timeout_sec": 0, 00:23:33.663 "reconnect_delay_sec": 0, 00:23:33.663 "fast_io_fail_timeout_sec": 0, 00:23:33.663 "disable_auto_failback": false, 00:23:33.663 "generate_uuids": false, 00:23:33.663 "transport_tos": 0, 00:23:33.663 "nvme_error_stat": false, 00:23:33.663 "rdma_srq_size": 0, 00:23:33.663 "io_path_stat": false, 00:23:33.663 "allow_accel_sequence": false, 00:23:33.663 "rdma_max_cq_size": 0, 00:23:33.663 "rdma_cm_event_timeout_ms": 0, 00:23:33.663 "dhchap_digests": [ 00:23:33.663 "sha256", 00:23:33.663 "sha384", 00:23:33.664 "sha512" 00:23:33.664 ], 00:23:33.664 "dhchap_dhgroups": [ 00:23:33.664 "null", 00:23:33.664 "ffdhe2048", 00:23:33.664 "ffdhe3072", 00:23:33.664 "ffdhe4096", 00:23:33.664 "ffdhe6144", 00:23:33.664 "ffdhe8192" 00:23:33.664 ], 00:23:33.664 "rdma_umr_per_io": false 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_nvme_attach_controller", 00:23:33.664 "params": { 00:23:33.664 "name": "nvme0", 00:23:33.664 "trtype": "TCP", 00:23:33.664 "adrfam": "IPv4", 00:23:33.664 "traddr": "10.0.0.2", 00:23:33.664 "trsvcid": "4420", 00:23:33.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.664 "prchk_reftag": false, 00:23:33.664 "prchk_guard": false, 00:23:33.664 "ctrlr_loss_timeout_sec": 0, 00:23:33.664 "reconnect_delay_sec": 0, 00:23:33.664 "fast_io_fail_timeout_sec": 0, 00:23:33.664 "psk": "key0", 00:23:33.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.664 "hdgst": false, 00:23:33.664 "ddgst": false, 00:23:33.664 "multipath": "multipath" 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_nvme_set_hotplug", 00:23:33.664 "params": { 00:23:33.664 "period_us": 100000, 00:23:33.664 "enable": false 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_enable_histogram", 00:23:33.664 "params": { 00:23:33.664 "name": "nvme0n1", 00:23:33.664 "enable": true 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_wait_for_examine" 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "subsystem": "nbd", 00:23:33.664 "config": [] 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 }' 00:23:33.664 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.664 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 [2024-12-16 16:29:22.054341] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:33.664 [2024-12-16 16:29:22.054394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024982 ] 00:23:33.664 [2024-12-16 16:29:22.131917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.664 [2024-12-16 16:29:22.153791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.923 [2024-12-16 16:29:22.301900] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.492 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.492 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.492 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.492 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:34.751 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.751 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.751 Running I/O for 1 seconds... 00:23:35.689 5444.00 IOPS, 21.27 MiB/s 00:23:35.689 Latency(us) 00:23:35.689 [2024-12-16T15:29:24.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.689 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.689 Verification LBA range: start 0x0 length 0x2000 00:23:35.689 nvme0n1 : 1.01 5501.76 21.49 0.00 0.00 23112.44 4868.39 21595.67 00:23:35.689 [2024-12-16T15:29:24.298Z] =================================================================================================================== 00:23:35.689 [2024-12-16T15:29:24.298Z] Total : 5501.76 21.49 0.00 0.00 23112.44 4868.39 21595.67 00:23:35.689 { 00:23:35.689 "results": [ 00:23:35.689 { 00:23:35.689 "job": "nvme0n1", 00:23:35.689 "core_mask": "0x2", 00:23:35.689 "workload": "verify", 00:23:35.689 "status": "finished", 00:23:35.689 "verify_range": { 00:23:35.689 "start": 0, 00:23:35.689 "length": 8192 00:23:35.689 }, 00:23:35.689 "queue_depth": 128, 00:23:35.689 "io_size": 4096, 00:23:35.689 "runtime": 1.012766, 00:23:35.689 "iops": 5501.764474715778, 00:23:35.689 "mibps": 21.49126747935851, 00:23:35.689 "io_failed": 0, 00:23:35.689 "io_timeout": 0, 00:23:35.689 "avg_latency_us": 23112.43715721465, 00:23:35.689 "min_latency_us": 4868.388571428572, 00:23:35.689 "max_latency_us": 21595.67238095238 00:23:35.689 } 00:23:35.689 ], 00:23:35.689 "core_count": 1 00:23:35.689 } 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:35.689 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:35.689 nvmf_trace.0 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1024982 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024982 ']' 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024982 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024982 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024982' 00:23:35.949 killing process with pid 1024982 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024982 00:23:35.949 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.949 00:23:35.949 Latency(us) 00:23:35.949 [2024-12-16T15:29:24.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.949 [2024-12-16T15:29:24.558Z] =================================================================================================================== 00:23:35.949 [2024-12-16T15:29:24.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024982 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.949 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.949 rmmod nvme_tcp 00:23:35.949 rmmod nvme_fabrics 00:23:36.209 rmmod nvme_keyring 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.209 16:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1024948 ']' 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1024948 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024948 ']' 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024948 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024948 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024948' 00:23:36.209 killing process with pid 1024948 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024948 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024948 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.209 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.468 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.468 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.468 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.468 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.468 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kJ7TIQJX5s /tmp/tmp.sxyMsMqjrv /tmp/tmp.YuToDBlboT 00:23:38.376 00:23:38.376 real 1m18.905s 00:23:38.376 user 2m0.652s 00:23:38.376 sys 0m30.605s 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.376 ************************************ 00:23:38.376 END TEST nvmf_tls 
00:23:38.376 ************************************ 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:38.376 ************************************ 00:23:38.376 START TEST nvmf_fips 00:23:38.376 ************************************ 00:23:38.376 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.637 * Looking for test storage... 00:23:38.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.637 --rc genhtml_branch_coverage=1 00:23:38.637 --rc genhtml_function_coverage=1 00:23:38.637 --rc genhtml_legend=1 00:23:38.637 --rc geninfo_all_blocks=1 00:23:38.637 --rc geninfo_unexecuted_blocks=1 00:23:38.637 00:23:38.637 ' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.637 --rc genhtml_branch_coverage=1 00:23:38.637 --rc genhtml_function_coverage=1 00:23:38.637 --rc genhtml_legend=1 00:23:38.637 --rc geninfo_all_blocks=1 00:23:38.637 --rc geninfo_unexecuted_blocks=1 00:23:38.637 00:23:38.637 ' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.637 --rc genhtml_branch_coverage=1 00:23:38.637 --rc genhtml_function_coverage=1 00:23:38.637 --rc genhtml_legend=1 00:23:38.637 --rc geninfo_all_blocks=1 00:23:38.637 --rc geninfo_unexecuted_blocks=1 00:23:38.637 00:23:38.637 ' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.637 --rc genhtml_branch_coverage=1 00:23:38.637 --rc genhtml_function_coverage=1 00:23:38.637 --rc genhtml_legend=1 00:23:38.637 --rc geninfo_all_blocks=1 00:23:38.637 --rc geninfo_unexecuted_blocks=1 00:23:38.637 00:23:38.637 ' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.637 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:38.638 16:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:38.638 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:38.899 Error setting digest 00:23:38.899 4002A245237F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:38.899 4002A245237F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.899 
16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.899 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.475 16:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:45.475 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.475 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:45.476 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.476 16:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:45.476 Found net devices under 0000:af:00.0: cvl_0_0 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:45.476 Found net devices under 0000:af:00.1: cvl_0_1 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.476 16:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.476 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:23:45.476 00:23:45.476 --- 10.0.0.2 ping statistics --- 00:23:45.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.476 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:23:45.476 00:23:45.476 --- 10.0.0.1 ping statistics --- 00:23:45.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.476 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1028933 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1028933 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1028933 ']' 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.476 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.476 [2024-12-16 16:29:33.266110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:45.476 [2024-12-16 16:29:33.266159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.476 [2024-12-16 16:29:33.342577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.476 [2024-12-16 16:29:33.363287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.476 [2024-12-16 16:29:33.363323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.476 [2024-12-16 16:29:33.363330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.476 [2024-12-16 16:29:33.363336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.476 [2024-12-16 16:29:33.363340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.476 [2024-12-16 16:29:33.363834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.TB7 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.TB7 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.TB7 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.TB7 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.477 [2024-12-16 16:29:33.691033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.477 [2024-12-16 16:29:33.707035] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.477 [2024-12-16 16:29:33.707240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.477 malloc0 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.477 16:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1029149 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1029149 /var/tmp/bdevperf.sock 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029149 ']' 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.477 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.477 [2024-12-16 16:29:33.838878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:45.477 [2024-12-16 16:29:33.838930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029149 ] 00:23:45.477 [2024-12-16 16:29:33.912898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.477 [2024-12-16 16:29:33.935972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.477 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.477 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:45.477 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.TB7 00:23:45.736 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.995 [2024-12-16 16:29:34.396049] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.995 TLSTESTn1 00:23:45.995 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.995 Running I/O for 10 seconds... 
00:23:48.312 5447.00 IOPS, 21.28 MiB/s [2024-12-16T15:29:37.859Z] 5535.00 IOPS, 21.62 MiB/s [2024-12-16T15:29:38.797Z] 5470.00 IOPS, 21.37 MiB/s [2024-12-16T15:29:39.734Z] 5528.25 IOPS, 21.59 MiB/s [2024-12-16T15:29:40.673Z] 5521.20 IOPS, 21.57 MiB/s [2024-12-16T15:29:41.609Z] 5501.33 IOPS, 21.49 MiB/s [2024-12-16T15:29:42.988Z] 5524.14 IOPS, 21.58 MiB/s [2024-12-16T15:29:43.925Z] 5543.62 IOPS, 21.65 MiB/s [2024-12-16T15:29:44.862Z] 5548.78 IOPS, 21.67 MiB/s [2024-12-16T15:29:44.862Z] 5556.20 IOPS, 21.70 MiB/s 00:23:56.254 Latency(us) 00:23:56.254 [2024-12-16T15:29:44.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.254 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:56.254 Verification LBA range: start 0x0 length 0x2000 00:23:56.254 TLSTESTn1 : 10.02 5558.05 21.71 0.00 0.00 22991.62 6459.98 26089.57 00:23:56.254 [2024-12-16T15:29:44.863Z] =================================================================================================================== 00:23:56.254 [2024-12-16T15:29:44.863Z] Total : 5558.05 21.71 0.00 0.00 22991.62 6459.98 26089.57 00:23:56.254 { 00:23:56.254 "results": [ 00:23:56.254 { 00:23:56.254 "job": "TLSTESTn1", 00:23:56.254 "core_mask": "0x4", 00:23:56.254 "workload": "verify", 00:23:56.254 "status": "finished", 00:23:56.254 "verify_range": { 00:23:56.254 "start": 0, 00:23:56.254 "length": 8192 00:23:56.254 }, 00:23:56.254 "queue_depth": 128, 00:23:56.254 "io_size": 4096, 00:23:56.254 "runtime": 10.019158, 00:23:56.254 "iops": 5558.051884200249, 00:23:56.254 "mibps": 21.711140172657224, 00:23:56.254 "io_failed": 0, 00:23:56.254 "io_timeout": 0, 00:23:56.254 "avg_latency_us": 22991.621118787236, 00:23:56.254 "min_latency_us": 6459.977142857143, 00:23:56.254 "max_latency_us": 26089.569523809525 00:23:56.254 } 00:23:56.254 ], 00:23:56.254 "core_count": 1 00:23:56.254 } 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:56.254 nvmf_trace.0 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1029149 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029149 ']' 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1029149 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029149 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029149' 00:23:56.254 killing process with pid 1029149 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029149 00:23:56.254 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.254 00:23:56.254 Latency(us) 00:23:56.254 [2024-12-16T15:29:44.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.254 [2024-12-16T15:29:44.863Z] =================================================================================================================== 00:23:56.254 [2024-12-16T15:29:44.863Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.254 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029149 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.514 rmmod nvme_tcp 00:23:56.514 rmmod nvme_fabrics 00:23:56.514 rmmod nvme_keyring 00:23:56.514 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1028933 ']' 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1028933 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1028933 ']' 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1028933 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1028933 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.514 16:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1028933' 00:23:56.514 killing process with pid 1028933 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1028933 00:23:56.514 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1028933 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.773 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.TB7 00:23:59.312 00:23:59.312 real 0m20.338s 00:23:59.312 user 0m21.294s 00:23:59.312 sys 0m9.483s 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.312 ************************************ 00:23:59.312 END TEST nvmf_fips 00:23:59.312 ************************************ 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.312 ************************************ 00:23:59.312 START TEST nvmf_control_msg_list 00:23:59.312 ************************************ 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:59.312 * Looking for test storage... 
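Note on the version gate traced next: "lt 1.15 2" is scripts/common.sh comparing the installed lcov against 2.x. A simplified bash reconstruction of what the xtrace below walks through (only the '<' and '>' operators are kept; decimal() is reduced to its numeric branch, and non-numeric version components are assumed to compare as 0):

# Simplified reconstruction of scripts/common.sh's cmp_versions, as the
# trace below exercises it for "lt 1.15 2".
decimal() {
    [[ ${1,,} =~ ^[0-9]+$ ]] && echo $(($1)) || echo 0   # non-numeric -> 0 (assumption)
}
cmp_versions() {
    local ver1 ver2 op=$2 lt=0 gt=0 v
    IFS=.-: read -ra ver1 <<< "$1"                       # split on '.', '-', ':'
    IFS=.-: read -ra ver2 <<< "$3"
    case "$op" in "<") lt=1 ;; ">") gt=1 ;; esac
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && return $((gt ^ 1))      # '>' holds: succeed iff op was '>'
        ((ver1[v] < ver2[v])) && return $((lt ^ 1))      # '<' holds: succeed iff op was '<'
    done
    return 1                                             # equal: neither strict comparison holds
}
lt() { cmp_versions "$1" "<" "$2"; }
lt 1.15 2 && echo "lcov 1.15 predates 2.x"               # the check the trace performs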
00:23:59.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.312 --rc genhtml_branch_coverage=1 00:23:59.312 --rc genhtml_function_coverage=1 00:23:59.312 --rc genhtml_legend=1 00:23:59.312 --rc geninfo_all_blocks=1 00:23:59.312 --rc geninfo_unexecuted_blocks=1 00:23:59.312 00:23:59.312 ' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.312 --rc genhtml_branch_coverage=1 00:23:59.312 --rc genhtml_function_coverage=1 00:23:59.312 --rc genhtml_legend=1 00:23:59.312 --rc geninfo_all_blocks=1 00:23:59.312 --rc geninfo_unexecuted_blocks=1 00:23:59.312 00:23:59.312 ' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.312 --rc genhtml_branch_coverage=1 00:23:59.312 --rc genhtml_function_coverage=1 00:23:59.312 --rc genhtml_legend=1 00:23:59.312 --rc geninfo_all_blocks=1 00:23:59.312 --rc geninfo_unexecuted_blocks=1 00:23:59.312 00:23:59.312 ' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.312 --rc genhtml_branch_coverage=1 00:23:59.312 --rc genhtml_function_coverage=1 00:23:59.312 --rc genhtml_legend=1 00:23:59.312 --rc geninfo_all_blocks=1 00:23:59.312 --rc geninfo_unexecuted_blocks=1 00:23:59.312 00:23:59.312 ' 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:59.312 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.313 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:04.592 16:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:04.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.592 16:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:04.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:04.592 Found net devices under 0000:af:00.0: cvl_0_0 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.592 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:04.593 Found net devices under 0000:af:00.1: cvl_0_1 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.593 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.851 16:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:24:04.851 00:24:04.851 --- 10.0.0.2 ping statistics --- 00:24:04.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.851 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:04.851 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:24:04.851 00:24:04.852 --- 10.0.0.1 ping statistics --- 00:24:04.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.852 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.852 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1034347 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1034347 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1034347 ']' 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.111 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.111 [2024-12-16 16:29:53.521310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:05.111 [2024-12-16 16:29:53.521357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.111 [2024-12-16 16:29:53.600407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.111 [2024-12-16 16:29:53.621685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.111 [2024-12-16 16:29:53.621717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.111 [2024-12-16 16:29:53.621724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.112 [2024-12-16 16:29:53.621730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.112 [2024-12-16 16:29:53.621735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
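The rpc_cmd sequence traced below (control_msg_list.sh@19-@23) configures the target that the three perf jobs then hit. Run by hand it would look roughly like this sketch, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock that waitforlisten polls above; the method names and arguments are taken verbatim from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# TCP transport with 768-byte in-capsule data and a single control message
# ('-t tcp -o' comes from NVMF_TRANSPORT_OPTS set earlier in the trace)
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
# subsystem allowing any host (-a), backed by a 32 MiB, 512 B-block malloc bdev
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
$SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# each perf job (control_msg_list.sh@26-@31) then connects with:
#   spdk_nvme_perf -c <mask> -q 1 -o 4096 -w randread -t 1 \
#       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'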
00:24:05.112 [2024-12-16 16:29:53.622181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.371 [2024-12-16 16:29:53.764525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.371 Malloc0 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.371 16:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.371 [2024-12-16 16:29:53.812652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1034442 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1034443 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1034444 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:05.371 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1034442 00:24:05.371 [2024-12-16 16:29:53.897091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:05.371 [2024-12-16 16:29:53.907144] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:05.371 [2024-12-16 16:29:53.907278] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:06.747 Initializing NVMe Controllers 00:24:06.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:06.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:06.748 Initialization complete. Launching workers. 
00:24:06.748 ======================================================== 00:24:06.748 Latency(us) 00:24:06.748 Device Information : IOPS MiB/s Average min max 00:24:06.748 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2515.00 9.82 397.10 120.58 41025.99 00:24:06.748 ======================================================== 00:24:06.748 Total : 2515.00 9.82 397.10 120.58 41025.99 00:24:06.748 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1034443 00:24:06.748 Initializing NVMe Controllers 00:24:06.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:06.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:06.748 Initialization complete. Launching workers. 00:24:06.748 ======================================================== 00:24:06.748 Latency(us) 00:24:06.748 Device Information : IOPS MiB/s Average min max 00:24:06.748 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4520.00 17.66 224.43 121.62 41938.67 00:24:06.748 ======================================================== 00:24:06.748 Total : 4520.00 17.66 224.43 121.62 41938.67 00:24:06.748 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1034444 00:24:06.748 Initializing NVMe Controllers 00:24:06.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:06.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:06.748 Initialization complete. Launching workers. 00:24:06.748 ======================================================== 00:24:06.748 Latency(us) 00:24:06.748 Device Information : IOPS MiB/s Average min max 00:24:06.748 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41016.57 40783.44 41963.12 00:24:06.748 ======================================================== 00:24:06.748 Total : 25.00 0.10 41016.57 40783.44 41963.12 00:24:06.748 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.748 rmmod nvme_tcp 00:24:06.748 rmmod nvme_fabrics 00:24:06.748 rmmod nvme_keyring 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 1034347 ']' 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1034347 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1034347 ']' 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1034347 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1034347 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1034347' 00:24:06.748 killing process with pid 1034347 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1034347 00:24:06.748 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1034347 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.007 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.003 00:24:09.003 real 0m10.148s 00:24:09.003 user 0m6.911s 00:24:09.003 sys 0m5.426s 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 ************************************ 00:24:09.003 END TEST nvmf_control_msg_list 00:24:09.003 
************************************ 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 ************************************ 00:24:09.003 START TEST nvmf_wait_for_buf 00:24:09.003 ************************************ 00:24:09.003 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:09.262 * Looking for test storage... 00:24:09.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.262 --rc genhtml_branch_coverage=1 00:24:09.262 --rc genhtml_function_coverage=1 00:24:09.262 --rc genhtml_legend=1 00:24:09.262 --rc geninfo_all_blocks=1 00:24:09.262 --rc geninfo_unexecuted_blocks=1 00:24:09.262 00:24:09.262 ' 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.262 --rc genhtml_branch_coverage=1 00:24:09.262 --rc genhtml_function_coverage=1 00:24:09.262 --rc genhtml_legend=1 00:24:09.262 --rc geninfo_all_blocks=1 00:24:09.262 --rc geninfo_unexecuted_blocks=1 00:24:09.262 00:24:09.262 ' 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.262 --rc genhtml_branch_coverage=1 00:24:09.262 --rc genhtml_function_coverage=1 00:24:09.262 --rc genhtml_legend=1 00:24:09.262 --rc geninfo_all_blocks=1 00:24:09.262 --rc geninfo_unexecuted_blocks=1 00:24:09.262 00:24:09.262 ' 00:24:09.262 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.262 --rc genhtml_branch_coverage=1 00:24:09.263 --rc genhtml_function_coverage=1 00:24:09.263 --rc genhtml_legend=1 00:24:09.263 --rc geninfo_all_blocks=1 00:24:09.263 --rc geninfo_unexecuted_blocks=1 00:24:09.263 00:24:09.263 ' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.263 16:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.263 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.829 
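The "[: : integer expression expected" complaint captured a few entries above is a benign bug in nvmf/common.sh: line 33 feeds an empty value straight into a numeric test, bash prints the error, and build_nvmf_app_args simply continues. A hypothetical hardened form of such a check is sketched below; SOME_FLAG is a stand-in name, not the actual variable used by common.sh:

    SOME_FLAG=""                              # reproduces the empty value seen above
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then      # ':-0' keeps the operand numeric
        echo "append the optional nvmf_tgt argument here"
    fi
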
16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:15.829 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:15.829 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.829 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:15.829 Found net devices under 0000:af:00.0: cvl_0_0 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:15.830 Found net devices under 0000:af:00.1: cvl_0_1 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.830 16:30:03 
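The discovery pass above narrows a whitelist of Intel E810/X722 and Mellanox PCI IDs down to the two ice-driven ports actually installed, then resolves each PCI address to its kernel interface through sysfs, keeping only ports whose link is up (that is what the [[ ice == unknown ]] and [[ up == up ]] tests are doing). A condensed sketch of the sysfs mapping step, simplified from the real loop in nvmf/common.sh:

    # Sketch: resolve the netdevs behind the two ports reported above.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $net ]] || continue         # port has no bound interface
            echo "Found net devices under $pci: ${net##*/}"
        done
    done
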
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:24:15.830 00:24:15.830 --- 10.0.0.2 ping statistics --- 00:24:15.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.830 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:15.830 00:24:15.830 --- 10.0.0.1 ping statistics --- 00:24:15.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.830 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1038259 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1038259 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1038259 ']' 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.830 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.830 [2024-12-16 16:30:03.861831] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
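The two successful pings above confirm the topology that nvmf_tcp_init just built: the target-side port is moved into its own network namespace so that initiator-to-target traffic actually crosses the link instead of being looped back locally, letting one host play both roles over real NICs. Collected from the trace, the plumbing is:

    ip netns add cvl_0_0_ns_spdk                           # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
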
00:24:15.830 [2024-12-16 16:30:03.861872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.830 [2024-12-16 16:30:03.940577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.830 [2024-12-16 16:30:03.962052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.830 [2024-12-16 16:30:03.962084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.830 [2024-12-16 16:30:03.962092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.830 [2024-12-16 16:30:03.962103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.830 [2024-12-16 16:30:03.962125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.830 [2024-12-16 16:30:03.962614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.830 16:30:04 
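With the target booting under --wait-for-rpc, the rest of wait_for_buf.sh is a compact RPC script. Condensed from the trace around this point: shrink the iobuf small pool to a deliberately inadequate 154 buffers before framework init, expose a malloc bdev over NVMe/TCP, push a short perf load through it, then require that the pool's retry counter moved, which proves I/O really had to wait for buffers (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py):

    rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc_cmd framework_start_init
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    retry_count=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1     # pool never ran dry: test proved nothing (2038 here)
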
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:15.830 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.831 Malloc0 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.831 [2024-12-16 16:30:04.159846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.831 [2024-12-16 16:30:04.188025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.831 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.831 [2024-12-16 16:30:04.273172] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:17.207 Initializing NVMe Controllers 00:24:17.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:17.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:17.207 Initialization complete. Launching workers. 00:24:17.207 ======================================================== 00:24:17.207 Latency(us) 00:24:17.207 Device Information : IOPS MiB/s Average min max 00:24:17.207 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32239.57 7289.11 63848.81 00:24:17.207 ======================================================== 00:24:17.207 Total : 129.00 16.12 32239.57 7289.11 63848.81 00:24:17.207 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.207 rmmod nvme_tcp 00:24:17.207 rmmod nvme_fabrics 00:24:17.207 rmmod nvme_keyring 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1038259 ']' 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1038259 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1038259 ']' 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1038259 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038259 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038259' 00:24:17.207 killing process with pid 1038259 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1038259 00:24:17.207 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1038259 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.467 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.373 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.373 00:24:19.373 real 0m10.387s 00:24:19.373 user 0m3.917s 00:24:19.373 sys 0m4.832s 00:24:19.373 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.373 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.373 ************************************ 00:24:19.373 END TEST nvmf_wait_for_buf 00:24:19.373 ************************************ 00:24:19.632 16:30:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:19.632 16:30:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.632 16:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:19.632 16:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.633 16:30:08 
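The asterisk banners and the real/user/sys summary above are printed by the run_test wrapper in common/autotest_common.sh, which times each suite and brackets it with START/END markers; the same wrapper is what opens nvmf_fuzz next. A hedged sketch of its observable behavior only, not the actual source (the real function also manages xtrace state and exit-code accounting):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                          # emits the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
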
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:19.633 ************************************ 00:24:19.633 START TEST nvmf_fuzz 00:24:19.633 ************************************ 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.633 * Looking for test storage... 00:24:19.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:19.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.633 --rc genhtml_branch_coverage=1 00:24:19.633 --rc genhtml_function_coverage=1 00:24:19.633 --rc genhtml_legend=1 00:24:19.633 --rc geninfo_all_blocks=1 00:24:19.633 --rc geninfo_unexecuted_blocks=1 00:24:19.633 00:24:19.633 ' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:19.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.633 --rc genhtml_branch_coverage=1 00:24:19.633 --rc genhtml_function_coverage=1 00:24:19.633 --rc genhtml_legend=1 00:24:19.633 --rc geninfo_all_blocks=1 00:24:19.633 --rc geninfo_unexecuted_blocks=1 00:24:19.633 00:24:19.633 ' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:19.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.633 --rc genhtml_branch_coverage=1 00:24:19.633 --rc genhtml_function_coverage=1 00:24:19.633 --rc genhtml_legend=1 00:24:19.633 --rc geninfo_all_blocks=1 00:24:19.633 --rc geninfo_unexecuted_blocks=1 00:24:19.633 00:24:19.633 ' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:19.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.633 --rc genhtml_branch_coverage=1 00:24:19.633 --rc genhtml_function_coverage=1 00:24:19.633 --rc genhtml_legend=1 00:24:19.633 --rc geninfo_all_blocks=1 00:24:19.633 --rc geninfo_unexecuted_blocks=1 00:24:19.633 00:24:19.633 ' 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.633 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.892 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:19.893 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:19.893 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.893 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:26.468 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:26.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:26.468 Found net devices under 0000:af:00.0: cvl_0_0 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:26.468 Found net devices under 0000:af:00.1: cvl_0_1 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.468 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.469 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:24:26.469 00:24:26.469 --- 10.0.0.2 ping statistics --- 00:24:26.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.469 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:26.469 00:24:26.469 --- 10.0.0.1 ping statistics --- 00:24:26.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.469 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1042360 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1042360 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1042360 ']' 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
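The trace above (nvmf/common.sh@250-291) builds the test topology by moving one physical port into a network namespace, so target and initiator traffic cross a real link instead of loopback. A minimal sketch of that sequence, using the interface names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.2/10.0.0.1); substitute your own NIC ports:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"        # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port; the comment tag lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                     # verify both directions before starting the target
    ip netns exec "$NS" ping -c 1 10.0.0.1

Pinging in both directions before launching nvmf_tgt is what the harness uses to confirm the namespaced link actually carries traffic.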
00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.469 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.470 Malloc0 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:26.470 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:58.548 Fuzzing completed. 
Shutting down the fuzz application 00:24:58.548 00:24:58.548 Dumping successful admin opcodes: 00:24:58.548 9, 10, 00:24:58.548 Dumping successful io opcodes: 00:24:58.548 0, 9, 00:24:58.548 NS: 0x2000008eff00 I/O qp, Total commands completed: 1015154, total successful commands: 5944, random_seed: 705473216 00:24:58.548 NS: 0x2000008eff00 admin qp, Total commands completed: 132016, total successful commands: 29, random_seed: 2328705344 00:24:58.548 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:58.548 Fuzzing completed. Shutting down the fuzz application 00:24:58.548 00:24:58.548 Dumping successful admin opcodes: 00:24:58.548 00:24:58.548 Dumping successful io opcodes: 00:24:58.548 00:24:58.548 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2037349278 00:24:58.549 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 2037410122 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.549 rmmod nvme_tcp 00:24:58.549 rmmod nvme_fabrics 00:24:58.549 rmmod nvme_keyring 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1042360 ']' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1042360 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1042360 ']' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1042360 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042360 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042360' 00:24:58.549 killing process with pid 1042360 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1042360 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1042360 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.549 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.927 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.927 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:59.927 00:24:59.927 real 0m40.467s 00:24:59.927 user 0m54.003s 00:24:59.927 sys 0m15.759s 00:24:59.927 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.927 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.927 ************************************ 00:24:59.927 END TEST nvmf_fuzz 00:24:59.927 ************************************ 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.186 ************************************ 00:25:00.186 START 
TEST nvmf_multiconnection 00:25:00.186 ************************************ 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:00.186 * Looking for test storage... 00:25:00.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.186 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:00.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.187 --rc genhtml_branch_coverage=1 00:25:00.187 --rc genhtml_function_coverage=1 00:25:00.187 --rc genhtml_legend=1 00:25:00.187 --rc geninfo_all_blocks=1 00:25:00.187 --rc geninfo_unexecuted_blocks=1 00:25:00.187 00:25:00.187 ' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:00.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.187 --rc genhtml_branch_coverage=1 00:25:00.187 --rc genhtml_function_coverage=1 00:25:00.187 --rc genhtml_legend=1 00:25:00.187 --rc geninfo_all_blocks=1 00:25:00.187 --rc geninfo_unexecuted_blocks=1 00:25:00.187 00:25:00.187 ' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:00.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.187 --rc genhtml_branch_coverage=1 00:25:00.187 --rc genhtml_function_coverage=1 00:25:00.187 --rc genhtml_legend=1 00:25:00.187 --rc geninfo_all_blocks=1 00:25:00.187 --rc geninfo_unexecuted_blocks=1 00:25:00.187 00:25:00.187 ' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:00.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.187 --rc genhtml_branch_coverage=1 00:25:00.187 --rc genhtml_function_coverage=1 00:25:00.187 --rc genhtml_legend=1 00:25:00.187 --rc geninfo_all_blocks=1 00:25:00.187 --rc geninfo_unexecuted_blocks=1 00:25:00.187 00:25:00.187 ' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.187 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.446 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.017 16:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:07.017 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:07.017 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:07.017 Found net devices under 0000:af:00.0: cvl_0_0 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:07.017 Found net devices under 0000:af:00.1: cvl_0_1 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:25:07.017 00:25:07.017 --- 10.0.0.2 ping statistics --- 00:25:07.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.017 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:25:07.017 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:25:07.017 00:25:07.017 --- 10.0.0.1 ping statistics --- 00:25:07.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.018 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1050926 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1050926 00:25:07.018 16:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1050926 ']' 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.018 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 [2024-12-16 16:30:54.877012] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:07.018 [2024-12-16 16:30:54.877059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.018 [2024-12-16 16:30:54.956583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.018 [2024-12-16 16:30:54.980816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.018 [2024-12-16 16:30:54.980856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.018 [2024-12-16 16:30:54.980863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.018 [2024-12-16 16:30:54.980870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.018 [2024-12-16 16:30:54.980875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
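Here nvmf_tgt is launched under the namespace and the harness blocks in waitforlisten until the RPC socket answers before any configuration is sent. A simplified stand-in for that launch-and-wait pattern (the polling loop below is an assumption for illustration, not the actual waitforlisten implementation; the paths and flags are the ones from this run):

    ip netns exec cvl_0_0_ns_spdk \
      ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app is ready to accept configuration
    until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1         # bail out if the target died during startup
      sleep 0.5
    done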
00:25:07.018 [2024-12-16 16:30:54.982383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.018 [2024-12-16 16:30:54.982493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.018 [2024-12-16 16:30:54.982579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.018 [2024-12-16 16:30:54.982580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 [2024-12-16 16:30:55.126836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 Malloc1 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
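With the TCP transport created, multiconnection.sh@21-25 loops over NVMF_SUBSYS=11 subsystems, giving each a 64 MiB malloc bdev, a namespace, and a listener on the same 10.0.0.2:4420 endpoint. A sketch of that loop as it appears in the trace, with rpc as a hypothetical shorthand for the rpc.py invocation:

    rpc() { ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # hypothetical helper
    rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
      rpc bdev_malloc_create 64 512 -b "Malloc$i"                 # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
      rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

All eleven subsystems share one listener address, so the initiator side can later open eleven independent controllers against the same TCP endpoint.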
00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 [2024-12-16 16:30:55.197678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 Malloc2 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 Malloc3 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:07.018 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 Malloc4 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 Malloc5 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 Malloc6 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 Malloc7 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 Malloc8 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 Malloc9 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:07.019 16:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.019 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.020 Malloc10 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.020 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.279 Malloc11 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.279 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:08.654 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:08.654 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:08.654 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.654 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:08.654 16:30:56 
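
The trace above is the target-side provisioning loop from multiconnection.sh: for each of the eleven subsystems it creates a 64 MiB malloc bdev with a 512-byte block size, creates subsystem nqn.2016-06.io.spdk:cnodeN (serial SPDKN, any host allowed), attaches the bdev as a namespace, and adds an NVMe/TCP listener on 10.0.0.2:4420. A minimal standalone sketch of the same sequence using SPDK's rpc.py follows; the RPC names and arguments are taken verbatim from the trace, while the rpc.py path and the assumption that the TCP transport already exists (it was set up earlier in this run) are mine.

#!/usr/bin/env bash
# Sketch of the per-subsystem setup loop traced above. Assumes a running
# nvmf_tgt with the TCP transport already created, and rpc.py on PATH.
RPC=rpc.py          # e.g. scripts/rpc.py in an SPDK checkout (assumption)
NVMF_SUBSYS=11      # matches the "seq 1 $NVMF_SUBSYS" loop in the trace
for i in $(seq 1 "$NVMF_SUBSYS"); do
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"        # 64 MiB, 512 B blocks
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
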
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.555 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:11.489 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:11.489 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:11.489 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:11.489 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:11.489 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:14.019 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.020 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:14.954 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:14.954 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:14.954 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:14.954 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:14.954 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.921 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:18.293 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:18.293 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:18.293 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.293 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:18.293 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:20.195 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:20.195 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:20.195 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:20.196 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:20.196 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.196 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:20.196 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.196 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:21.572 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:21.572 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:21.572 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.572 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:21.572 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.475 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:24.852 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:24.852 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:24.852 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.852 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:24.852 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.755 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:28.131 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:28.131 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:28.131 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.131 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:28.131 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.033 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:31.409 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:31.409 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:31.409 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.409 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:31.409 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:33.310 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.311 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:34.685 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:34.685 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:34.685 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.685 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:34.685 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.588 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:37.962 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:37.962 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:37.962 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.962 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:37.962 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.864 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.864 16:31:28 
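
In the connect phase above, the host attaches to each subsystem with nvme-cli and then waitforserial polls lsblk until a block device whose serial matches SPDKN shows up, sleeping two seconds between checks; the (( i++ <= 15 )) guard bounds the wait at roughly thirty seconds. A condensed host-side sketch, with the hostnqn and hostid values taken from the trace and the retry bound mirroring that guard (the poll-then-check ordering here is simplified relative to the helper):

#!/usr/bin/env bash
# Host-side connect-and-wait loop, condensed from the trace above.
HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$HOSTID"
for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # waitforserial: poll until a namespace with serial SPDK$i appears
    tries=0
    while [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -lt 1 ]; do
        sleep 2
        tries=$((tries + 1))
        [ "$tries" -le 15 ] || { echo "SPDK$i never appeared" >&2; exit 1; }
    done
done
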
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:41.768 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:41.768 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.768 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.768 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.768 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.672 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:43.672 [global] 00:25:43.673 thread=1 00:25:43.673 invalidate=1 00:25:43.673 rw=read 00:25:43.673 time_based=1 00:25:43.673 runtime=10 00:25:43.673 ioengine=libaio 00:25:43.673 direct=1 00:25:43.673 bs=262144 00:25:43.673 iodepth=64 00:25:43.673 norandommap=1 00:25:43.673 numjobs=1 00:25:43.673 00:25:43.673 [job0] 00:25:43.673 filename=/dev/nvme0n1 00:25:43.673 [job1] 00:25:43.673 filename=/dev/nvme10n1 00:25:43.673 [job2] 00:25:43.673 filename=/dev/nvme1n1 00:25:43.673 [job3] 00:25:43.673 filename=/dev/nvme2n1 00:25:43.673 [job4] 00:25:43.673 filename=/dev/nvme3n1 00:25:43.673 [job5] 00:25:43.673 filename=/dev/nvme4n1 00:25:43.673 [job6] 00:25:43.673 filename=/dev/nvme5n1 00:25:43.673 [job7] 00:25:43.673 filename=/dev/nvme6n1 00:25:43.673 [job8] 00:25:43.673 filename=/dev/nvme7n1 00:25:43.673 [job9] 00:25:43.673 filename=/dev/nvme8n1 00:25:43.673 [job10] 00:25:43.673 filename=/dev/nvme9n1 00:25:43.673 Could not set queue depth (nvme0n1) 00:25:43.673 Could not set queue depth (nvme10n1) 00:25:43.673 Could not set queue depth (nvme1n1) 00:25:43.673 Could not set queue depth (nvme2n1) 00:25:43.673 Could not set queue depth (nvme3n1) 00:25:43.673 Could not set queue depth (nvme4n1) 00:25:43.673 Could not set queue depth (nvme5n1) 00:25:43.673 Could not set queue depth (nvme6n1) 00:25:43.673 Could not set queue depth (nvme7n1) 00:25:43.673 Could not set queue depth (nvme8n1) 00:25:43.673 Could not set queue depth (nvme9n1) 00:25:43.932 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
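
The read phase is driven by scripts/fio-wrapper, and the job file it generates is dumped above: a shared [global] section (libaio, direct=1, bs=262144, iodepth=64, rw=read, time_based runtime=10) plus one [jobN] stanza per connected namespace. The flag mapping (-i 262144 to bs, -d 64 to iodepth, -t read to rw, -r 10 to runtime) is an inference from comparing the wrapper's command line with that dump, not taken from its documentation. A hand-rolled equivalent using the device names fio printed above:

#!/usr/bin/env bash
# Rebuild the job file dumped above by hand and run it (illustrative;
# the flag-to-option mapping is inferred from the trace, see lead-in).
{
cat <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
n=0
# Same device order fio listed above: nvme0n1, nvme10n1, nvme1n1, ...
for dev in /dev/nvme{0,10,1,2,3,4,5,6,7,8,9}n1; do
    printf '[job%d]\nfilename=%s\n' "$n" "$dev"
    n=$((n + 1))
done
} > nvmf-read.fio
fio nvmf-read.fio
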
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.932 fio-3.35 00:25:43.932 Starting 11 threads 00:25:56.145 00:25:56.145 job0: (groupid=0, jobs=1): err= 0: pid=1057320: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=244, BW=61.0MiB/s (64.0MB/s)(614MiB/10057msec) 00:25:56.146 slat (usec): min=12, max=218025, avg=2559.52, stdev=12937.85 00:25:56.146 clat (msec): min=15, max=866, avg=259.34, stdev=176.61 00:25:56.146 lat (msec): min=15, max=866, avg=261.90, stdev=178.67 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 60], 20.00th=[ 94], 00:25:56.146 | 30.00th=[ 146], 40.00th=[ 192], 50.00th=[ 224], 60.00th=[ 255], 00:25:56.146 | 70.00th=[ 317], 80.00th=[ 443], 90.00th=[ 514], 95.00th=[ 575], 00:25:56.146 | 99.00th=[ 751], 99.50th=[ 802], 99.90th=[ 869], 99.95th=[ 869], 00:25:56.146 | 99.99th=[ 869] 00:25:56.146 bw ( KiB/s): min=27136, max=196608, per=7.36%, avg=61235.20, stdev=40032.07, samples=20 00:25:56.146 iops : min= 106, max= 768, avg=239.20, stdev=156.38, samples=20 00:25:56.146 lat (msec) : 20=0.45%, 50=6.52%, 100=14.91%, 250=36.99%, 500=29.65% 00:25:56.146 lat (msec) : 750=10.51%, 1000=0.98% 00:25:56.146 cpu : usr=0.07%, sys=0.96%, ctx=470, majf=0, minf=3722 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:25:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=2455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job1: (groupid=0, jobs=1): err= 0: pid=1057344: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=343, BW=85.9MiB/s (90.1MB/s)(870MiB/10133msec) 00:25:56.146 slat (usec): min=15, max=159948, avg=2860.00, stdev=12329.71 00:25:56.146 clat (msec): min=15, max=660, avg=183.23, stdev=134.60 00:25:56.146 lat (msec): min=16, max=670, avg=186.09, stdev=136.56 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 37], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 72], 00:25:56.146 | 30.00th=[ 91], 40.00th=[ 115], 50.00th=[ 138], 60.00th=[ 163], 00:25:56.146 | 70.00th=[ 211], 80.00th=[ 300], 90.00th=[ 388], 95.00th=[ 460], 00:25:56.146 | 99.00th=[ 617], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 659], 00:25:56.146 | 99.99th=[ 659] 00:25:56.146 bw ( KiB/s): min=27648, max=258560, 
per=10.52%, avg=87475.20, stdev=63388.45, samples=20 00:25:56.146 iops : min= 108, max= 1010, avg=341.70, stdev=247.61, samples=20 00:25:56.146 lat (msec) : 20=0.09%, 50=4.63%, 100=29.36%, 250=40.91%, 500=21.43% 00:25:56.146 lat (msec) : 750=3.59% 00:25:56.146 cpu : usr=0.20%, sys=1.34%, ctx=477, majf=0, minf=4097 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=3481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job2: (groupid=0, jobs=1): err= 0: pid=1057386: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=493, BW=123MiB/s (129MB/s)(1251MiB/10132msec) 00:25:56.146 slat (usec): min=10, max=434387, avg=1296.52, stdev=10228.67 00:25:56.146 clat (usec): min=1212, max=853697, avg=128130.89, stdev=159777.67 00:25:56.146 lat (usec): min=1255, max=1288.1k, avg=129427.41, stdev=161152.42 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 32], 00:25:56.146 | 30.00th=[ 35], 40.00th=[ 39], 50.00th=[ 44], 60.00th=[ 58], 00:25:56.146 | 70.00th=[ 93], 80.00th=[ 255], 90.00th=[ 414], 95.00th=[ 485], 00:25:56.146 | 99.00th=[ 609], 99.50th=[ 642], 99.90th=[ 701], 99.95th=[ 852], 00:25:56.146 | 99.99th=[ 852] 00:25:56.146 bw ( KiB/s): min=31232, max=449024, per=15.21%, avg=126464.00, stdev=124539.64, samples=20 00:25:56.146 iops : min= 122, max= 1754, avg=494.00, stdev=486.48, samples=20 00:25:56.146 lat (msec) : 2=0.18%, 4=0.56%, 10=1.52%, 20=1.42%, 50=51.48% 00:25:56.146 lat (msec) : 100=16.11%, 250=8.27%, 500=16.05%, 750=4.32%, 1000=0.10% 00:25:56.146 cpu : usr=0.18%, sys=1.83%, ctx=975, majf=0, minf=4097 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job3: (groupid=0, jobs=1): err= 0: pid=1057409: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=211, BW=52.8MiB/s (55.4MB/s)(537MiB/10165msec) 00:25:56.146 slat (usec): min=14, max=297894, avg=2276.01, stdev=14860.24 00:25:56.146 clat (usec): min=1594, max=988649, avg=300377.80, stdev=197696.95 00:25:56.146 lat (usec): min=1654, max=988676, avg=302653.82, stdev=199329.49 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 51], 20.00th=[ 102], 00:25:56.146 | 30.00th=[ 161], 40.00th=[ 213], 50.00th=[ 284], 60.00th=[ 351], 00:25:56.146 | 70.00th=[ 414], 80.00th=[ 493], 90.00th=[ 584], 95.00th=[ 642], 00:25:56.146 | 99.00th=[ 743], 99.50th=[ 751], 99.90th=[ 810], 99.95th=[ 810], 00:25:56.146 | 99.99th=[ 986] 00:25:56.146 bw ( KiB/s): min=17408, max=108544, per=6.41%, avg=53327.65, stdev=23180.55, samples=20 00:25:56.146 iops : min= 68, max= 424, avg=208.30, stdev=90.56, samples=20 00:25:56.146 lat (msec) : 2=0.09%, 4=0.88%, 10=4.33%, 50=4.61%, 100=10.01% 00:25:56.146 lat (msec) : 250=25.80%, 500=34.98%, 750=18.72%, 1000=0.56% 00:25:56.146 cpu : usr=0.05%, sys=0.85%, ctx=604, majf=0, minf=4097 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:25:56.146 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job4: (groupid=0, jobs=1): err= 0: pid=1057424: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=288, BW=72.0MiB/s (75.5MB/s)(730MiB/10129msec) 00:25:56.146 slat (usec): min=9, max=356026, avg=2493.78, stdev=14387.94 00:25:56.146 clat (msec): min=2, max=880, avg=219.41, stdev=210.62 00:25:56.146 lat (msec): min=2, max=1044, avg=221.90, stdev=212.99 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 27], 00:25:56.146 | 30.00th=[ 31], 40.00th=[ 91], 50.00th=[ 163], 60.00th=[ 232], 00:25:56.146 | 70.00th=[ 288], 80.00th=[ 384], 90.00th=[ 550], 95.00th=[ 667], 00:25:56.146 | 99.00th=[ 776], 99.50th=[ 844], 99.90th=[ 877], 99.95th=[ 877], 00:25:56.146 | 99.99th=[ 877] 00:25:56.146 bw ( KiB/s): min=22528, max=443392, per=8.79%, avg=73095.90, stdev=90998.80, samples=20 00:25:56.146 iops : min= 88, max= 1732, avg=285.50, stdev=355.46, samples=20 00:25:56.146 lat (msec) : 4=0.27%, 10=0.10%, 20=1.47%, 50=31.05%, 100=9.12% 00:25:56.146 lat (msec) : 250=22.41%, 500=22.45%, 750=11.38%, 1000=1.75% 00:25:56.146 cpu : usr=0.15%, sys=1.06%, ctx=475, majf=0, minf=4097 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:25:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=2918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job5: (groupid=0, jobs=1): err= 0: pid=1057459: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=255, BW=64.0MiB/s (67.1MB/s)(648MiB/10132msec) 00:25:56.146 slat (usec): min=16, max=203902, avg=2492.80, stdev=13279.88 00:25:56.146 clat (usec): min=1027, max=919583, avg=247389.73, stdev=191837.57 00:25:56.146 lat (usec): min=1058, max=919601, avg=249882.53, stdev=193919.37 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 24], 20.00th=[ 50], 00:25:56.146 | 30.00th=[ 83], 40.00th=[ 188], 50.00th=[ 226], 60.00th=[ 275], 00:25:56.146 | 70.00th=[ 347], 80.00th=[ 426], 90.00th=[ 514], 95.00th=[ 592], 00:25:56.146 | 99.00th=[ 760], 99.50th=[ 802], 99.90th=[ 844], 99.95th=[ 844], 00:25:56.146 | 99.99th=[ 919] 00:25:56.146 bw ( KiB/s): min=22528, max=314880, per=7.78%, avg=64720.95, stdev=65154.48, samples=20 00:25:56.146 iops : min= 88, max= 1230, avg=252.80, stdev=254.52, samples=20 00:25:56.146 lat (msec) : 2=0.27%, 4=0.35%, 10=4.67%, 20=2.51%, 50=12.92% 00:25:56.146 lat (msec) : 100=12.04%, 250=22.65%, 500=33.95%, 750=9.61%, 1000=1.04% 00:25:56.146 cpu : usr=0.02%, sys=1.03%, ctx=910, majf=0, minf=4097 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job6: (groupid=0, jobs=1): err= 0: pid=1057462: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=392, BW=98.0MiB/s 
(103MB/s)(993MiB/10128msec) 00:25:56.146 slat (usec): min=15, max=197038, avg=2239.29, stdev=10263.36 00:25:56.146 clat (usec): min=1773, max=590412, avg=160770.29, stdev=136747.16 00:25:56.146 lat (msec): min=2, max=590, avg=163.01, stdev=138.64 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 26], 00:25:56.146 | 30.00th=[ 62], 40.00th=[ 102], 50.00th=[ 123], 60.00th=[ 150], 00:25:56.146 | 70.00th=[ 220], 80.00th=[ 284], 90.00th=[ 372], 95.00th=[ 439], 00:25:56.146 | 99.00th=[ 523], 99.50th=[ 558], 99.90th=[ 592], 99.95th=[ 592], 00:25:56.146 | 99.99th=[ 592] 00:25:56.146 bw ( KiB/s): min=30720, max=381952, per=12.03%, avg=100044.80, stdev=84801.92, samples=20 00:25:56.146 iops : min= 120, max= 1492, avg=390.80, stdev=331.26, samples=20 00:25:56.146 lat (msec) : 2=0.03%, 4=0.08%, 10=0.93%, 20=14.43%, 50=13.02% 00:25:56.146 lat (msec) : 100=10.15%, 250=35.50%, 500=24.14%, 750=1.74% 00:25:56.146 cpu : usr=0.21%, sys=1.70%, ctx=771, majf=0, minf=4097 00:25:56.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.146 issued rwts: total=3972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.146 job7: (groupid=0, jobs=1): err= 0: pid=1057463: Mon Dec 16 16:31:43 2024 00:25:56.146 read: IOPS=219, BW=54.9MiB/s (57.5MB/s)(556MiB/10126msec) 00:25:56.146 slat (usec): min=14, max=498091, avg=2724.37, stdev=16953.03 00:25:56.146 clat (usec): min=1036, max=879356, avg=288574.06, stdev=201961.35 00:25:56.146 lat (usec): min=1776, max=879400, avg=291298.43, stdev=204046.04 00:25:56.146 clat percentiles (msec): 00:25:56.146 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 39], 20.00th=[ 92], 00:25:56.146 | 30.00th=[ 126], 40.00th=[ 203], 50.00th=[ 284], 60.00th=[ 351], 00:25:56.146 | 70.00th=[ 405], 80.00th=[ 464], 90.00th=[ 550], 95.00th=[ 659], 00:25:56.147 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 877], 99.95th=[ 877], 00:25:56.147 | 99.99th=[ 877] 00:25:56.147 bw ( KiB/s): min=14848, max=125440, per=6.65%, avg=55270.40, stdev=33029.47, samples=20 00:25:56.147 iops : min= 58, max= 490, avg=215.90, stdev=129.02, samples=20 00:25:56.147 lat (msec) : 2=0.18%, 4=0.50%, 10=2.57%, 20=2.30%, 50=6.66% 00:25:56.147 lat (msec) : 100=10.31%, 250=24.39%, 500=38.88%, 750=11.88%, 1000=2.34% 00:25:56.147 cpu : usr=0.02%, sys=0.94%, ctx=567, majf=0, minf=4097 00:25:56.147 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:25:56.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.147 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.147 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.147 job8: (groupid=0, jobs=1): err= 0: pid=1057465: Mon Dec 16 16:31:43 2024 00:25:56.147 read: IOPS=224, BW=56.2MiB/s (58.9MB/s)(569MiB/10129msec) 00:25:56.147 slat (usec): min=20, max=280370, avg=2498.60, stdev=13742.56 00:25:56.147 clat (usec): min=1391, max=806118, avg=281980.84, stdev=210504.58 00:25:56.147 lat (usec): min=1438, max=813652, avg=284479.44, stdev=212794.25 00:25:56.147 clat percentiles (msec): 00:25:56.147 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 22], 20.00th=[ 41], 00:25:56.147 | 30.00th=[ 99], 40.00th=[ 186], 50.00th=[ 296], 
60.00th=[ 363], 00:25:56.147 | 70.00th=[ 401], 80.00th=[ 468], 90.00th=[ 558], 95.00th=[ 667], 00:25:56.147 | 99.00th=[ 751], 99.50th=[ 768], 99.90th=[ 810], 99.95th=[ 810], 00:25:56.147 | 99.99th=[ 810] 00:25:56.147 bw ( KiB/s): min=23040, max=243712, per=6.81%, avg=56661.20, stdev=49378.24, samples=20 00:25:56.147 iops : min= 90, max= 952, avg=221.30, stdev=192.86, samples=20 00:25:56.147 lat (msec) : 2=0.09%, 4=1.63%, 10=0.22%, 20=6.06%, 50=16.70% 00:25:56.147 lat (msec) : 100=5.54%, 250=15.03%, 500=40.03%, 750=13.62%, 1000=1.10% 00:25:56.147 cpu : usr=0.14%, sys=0.87%, ctx=740, majf=0, minf=4097 00:25:56.147 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:25:56.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.147 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.147 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.147 job9: (groupid=0, jobs=1): err= 0: pid=1057466: Mon Dec 16 16:31:43 2024 00:25:56.147 read: IOPS=285, BW=71.3MiB/s (74.8MB/s)(717MiB/10054msec) 00:25:56.147 slat (usec): min=16, max=211304, avg=1813.49, stdev=12587.95 00:25:56.147 clat (usec): min=552, max=862986, avg=222235.37, stdev=202549.84 00:25:56.147 lat (usec): min=595, max=863016, avg=224048.85, stdev=204383.65 00:25:56.147 clat percentiles (usec): 00:25:56.147 | 1.00th=[ 1123], 5.00th=[ 4555], 10.00th=[ 16057], 20.00th=[ 27132], 00:25:56.147 | 30.00th=[ 65274], 40.00th=[ 95945], 50.00th=[193987], 60.00th=[240124], 00:25:56.147 | 70.00th=[295699], 80.00th=[387974], 90.00th=[541066], 95.00th=[641729], 00:25:56.147 | 99.00th=[742392], 99.50th=[750781], 99.90th=[843056], 99.95th=[851444], 00:25:56.147 | 99.99th=[859833] 00:25:56.147 bw ( KiB/s): min=23552, max=164864, per=8.63%, avg=71808.00, stdev=46342.70, samples=20 00:25:56.147 iops : min= 92, max= 644, avg=280.50, stdev=181.03, samples=20 00:25:56.147 lat (usec) : 750=0.52%, 1000=0.35% 00:25:56.147 lat (msec) : 2=0.84%, 4=3.17%, 10=2.37%, 20=7.01%, 50=12.79% 00:25:56.147 lat (msec) : 100=13.77%, 250=20.29%, 500=26.07%, 750=12.13%, 1000=0.70% 00:25:56.147 cpu : usr=0.10%, sys=1.08%, ctx=959, majf=0, minf=4097 00:25:56.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:25:56.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.147 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.147 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.147 job10: (groupid=0, jobs=1): err= 0: pid=1057467: Mon Dec 16 16:31:43 2024 00:25:56.147 read: IOPS=304, BW=76.2MiB/s (79.9MB/s)(772MiB/10128msec) 00:25:56.147 slat (usec): min=14, max=363593, avg=1501.29, stdev=10633.79 00:25:56.147 clat (usec): min=941, max=835858, avg=208274.02, stdev=177193.02 00:25:56.147 lat (usec): min=959, max=835894, avg=209775.31, stdev=177844.61 00:25:56.147 clat percentiles (usec): 00:25:56.147 | 1.00th=[ 1565], 5.00th=[ 8160], 10.00th=[ 23200], 20.00th=[ 76022], 00:25:56.147 | 30.00th=[ 88605], 40.00th=[107480], 50.00th=[147850], 60.00th=[212861], 00:25:56.147 | 70.00th=[274727], 80.00th=[329253], 90.00th=[438305], 95.00th=[624952], 00:25:56.147 | 99.00th=[734004], 99.50th=[759170], 99.90th=[784335], 99.95th=[826278], 00:25:56.147 | 99.99th=[834667] 00:25:56.147 bw ( KiB/s): min=17920, max=195072, per=9.31%, avg=77392.05, 
stdev=49933.77, samples=20 00:25:56.147 iops : min= 70, max= 762, avg=302.30, stdev=195.07, samples=20 00:25:56.147 lat (usec) : 1000=0.03% 00:25:56.147 lat (msec) : 2=1.62%, 4=0.68%, 10=3.27%, 20=3.24%, 50=5.09% 00:25:56.147 lat (msec) : 100=22.03%, 250=30.19%, 500=25.40%, 750=7.81%, 1000=0.65% 00:25:56.147 cpu : usr=0.11%, sys=1.26%, ctx=925, majf=0, minf=4097 00:25:56.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:56.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.147 issued rwts: total=3087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.147 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.147 00:25:56.147 Run status group 0 (all jobs): 00:25:56.147 READ: bw=812MiB/s (852MB/s), 52.8MiB/s-123MiB/s (55.4MB/s-129MB/s), io=8256MiB (8657MB), run=10054-10165msec 00:25:56.147 00:25:56.147 Disk stats (read/write): 00:25:56.147 nvme0n1: ios=4717/0, merge=0/0, ticks=1232437/0, in_queue=1232437, util=94.87% 00:25:56.147 nvme10n1: ios=6798/0, merge=0/0, ticks=1220867/0, in_queue=1220867, util=95.31% 00:25:56.147 nvme1n1: ios=9864/0, merge=0/0, ticks=1214973/0, in_queue=1214973, util=95.95% 00:25:56.147 nvme2n1: ios=4154/0, merge=0/0, ticks=1205277/0, in_queue=1205277, util=96.32% 00:25:56.147 nvme3n1: ios=5701/0, merge=0/0, ticks=1200554/0, in_queue=1200554, util=96.56% 00:25:56.147 nvme4n1: ios=5039/0, merge=0/0, ticks=1231754/0, in_queue=1231754, util=97.37% 00:25:56.147 nvme5n1: ios=7816/0, merge=0/0, ticks=1219241/0, in_queue=1219241, util=97.72% 00:25:56.147 nvme6n1: ios=4287/0, merge=0/0, ticks=1197358/0, in_queue=1197358, util=98.03% 00:25:56.147 nvme7n1: ios=4293/0, merge=0/0, ticks=1238921/0, in_queue=1238921, util=98.92% 00:25:56.147 nvme8n1: ios=5571/0, merge=0/0, ticks=1240794/0, in_queue=1240794, util=99.13% 00:25:56.147 nvme9n1: ios=6011/0, merge=0/0, ticks=1230869/0, in_queue=1230869, util=99.21% 00:25:56.147 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:56.147 [global] 00:25:56.147 thread=1 00:25:56.147 invalidate=1 00:25:56.147 rw=randwrite 00:25:56.147 time_based=1 00:25:56.147 runtime=10 00:25:56.147 ioengine=libaio 00:25:56.147 direct=1 00:25:56.147 bs=262144 00:25:56.147 iodepth=64 00:25:56.147 norandommap=1 00:25:56.147 numjobs=1 00:25:56.147 00:25:56.147 [job0] 00:25:56.147 filename=/dev/nvme0n1 00:25:56.147 [job1] 00:25:56.147 filename=/dev/nvme10n1 00:25:56.147 [job2] 00:25:56.147 filename=/dev/nvme1n1 00:25:56.147 [job3] 00:25:56.147 filename=/dev/nvme2n1 00:25:56.147 [job4] 00:25:56.147 filename=/dev/nvme3n1 00:25:56.147 [job5] 00:25:56.147 filename=/dev/nvme4n1 00:25:56.147 [job6] 00:25:56.147 filename=/dev/nvme5n1 00:25:56.147 [job7] 00:25:56.147 filename=/dev/nvme6n1 00:25:56.147 [job8] 00:25:56.147 filename=/dev/nvme7n1 00:25:56.147 [job9] 00:25:56.147 filename=/dev/nvme8n1 00:25:56.147 [job10] 00:25:56.147 filename=/dev/nvme9n1 00:25:56.147 Could not set queue depth (nvme0n1) 00:25:56.147 Could not set queue depth (nvme10n1) 00:25:56.147 Could not set queue depth (nvme1n1) 00:25:56.147 Could not set queue depth (nvme2n1) 00:25:56.147 Could not set queue depth (nvme3n1) 00:25:56.147 Could not set queue depth (nvme4n1) 00:25:56.147 Could not set queue depth (nvme5n1) 00:25:56.147 Could not set queue depth (nvme6n1) 00:25:56.147 
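
The READ summary line above aggregates all eleven jobs (812MiB/s combined over 8256MiB in roughly ten seconds of runtime), and the per-device disk stats confirm every connected namespace saw I/O. When eyeballing a long run like this, the per-job throughput lines can be pulled out of a saved copy of the log with a one-liner; run.log below is a placeholder name, not a file this job produces:

# Per-job read throughput from a saved copy of this log (illustrative).
grep -oE 'read: IOPS=[0-9]+, BW=[0-9.]+MiB/s' run.log

# Total read IOPS across all jobs:
grep -oE 'read: IOPS=[0-9]+' run.log | awk -F= '{s += $2} END {print s, "IOPS total"}'
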
Could not set queue depth (nvme7n1)
00:25:56.147 Could not set queue depth (nvme8n1)
00:25:56.147 Could not set queue depth (nvme9n1)
00:25:56.147 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:56.147 fio-3.35
00:25:56.147 Starting 11 threads
00:26:06.262
00:26:06.262 job0: (groupid=0, jobs=1): err= 0: pid=1058510: Mon Dec 16 16:31:54 2024
00:26:06.262 write: IOPS=317, BW=79.4MiB/s (83.2MB/s)(810MiB/10206msec); 0 zone resets
00:26:06.262 slat (usec): min=34, max=146947, avg=2139.88, stdev=6474.65
00:26:06.262 clat (usec): min=1309, max=620494, avg=199289.13, stdev=111749.72
00:26:06.262 lat (usec): min=1900, max=620541, avg=201429.00, stdev=113103.64
00:26:06.262 clat percentiles (msec):
00:26:06.262 | 1.00th=[ 26], 5.00th=[ 50], 10.00th=[ 75], 20.00th=[ 114],
00:26:06.262 | 30.00th=[ 142], 40.00th=[ 171], 50.00th=[ 182], 60.00th=[ 188],
00:26:06.262 | 70.00th=[ 215], 80.00th=[ 275], 90.00th=[ 347], 95.00th=[ 447],
00:26:06.262 | 99.00th=[ 558], 99.50th=[ 584], 99.90th=[ 617], 99.95th=[ 617],
00:26:06.262 | 99.99th=[ 617]
00:26:06.262 bw ( KiB/s): min=32768, max=120832, per=7.13%, avg=81356.80, stdev=26294.39, samples=20
00:26:06.262 iops : min= 128, max= 472, avg=317.80, stdev=102.71, samples=20
00:26:06.262 lat (msec) : 2=0.06%, 4=0.19%, 10=0.09%, 20=0.28%, 50=4.54%
00:26:06.262 lat (msec) : 100=8.58%, 250=62.42%, 500=21.29%, 750=2.56%
00:26:06.262 cpu : usr=0.79%, sys=1.25%, ctx=1750, majf=0, minf=1
00:26:06.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:26:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:06.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:06.262 issued rwts: total=0,3241,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:06.262 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:06.262 job1: (groupid=0, jobs=1): err= 0: pid=1058522: Mon Dec 16 16:31:54 2024
00:26:06.262 write: IOPS=422, BW=106MiB/s (111MB/s)(1078MiB/10208msec); 0 zone resets
00:26:06.263 slat (usec): min=28, max=195781, avg=1787.70, stdev=5715.72
00:26:06.263 clat (usec): min=898, max=638084, avg=149620.41, stdev=131006.77
00:26:06.263 lat (usec): min=952, max=638153,
avg=151408.11, stdev=132156.41 00:26:06.263 clat percentiles (msec): 00:26:06.263 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 18], 20.00th=[ 56], 00:26:06.263 | 30.00th=[ 64], 40.00th=[ 81], 50.00th=[ 113], 60.00th=[ 155], 00:26:06.263 | 70.00th=[ 184], 80.00th=[ 213], 90.00th=[ 338], 95.00th=[ 443], 00:26:06.263 | 99.00th=[ 584], 99.50th=[ 625], 99.90th=[ 634], 99.95th=[ 634], 00:26:06.263 | 99.99th=[ 642] 00:26:06.263 bw ( KiB/s): min=27136, max=291840, per=9.53%, avg=108723.20, stdev=72460.74, samples=20 00:26:06.263 iops : min= 106, max= 1140, avg=424.70, stdev=283.05, samples=20 00:26:06.263 lat (usec) : 1000=0.05% 00:26:06.263 lat (msec) : 2=0.32%, 4=3.09%, 10=3.67%, 20=3.69%, 50=6.40% 00:26:06.263 lat (msec) : 100=29.23%, 250=37.30%, 500=12.87%, 750=3.39% 00:26:06.263 cpu : usr=1.08%, sys=1.74%, ctx=2004, majf=0, minf=1 00:26:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.263 issued rwts: total=0,4311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.263 job2: (groupid=0, jobs=1): err= 0: pid=1058523: Mon Dec 16 16:31:54 2024 00:26:06.263 write: IOPS=440, BW=110MiB/s (115MB/s)(1115MiB/10121msec); 0 zone resets 00:26:06.263 slat (usec): min=22, max=129340, avg=1592.12, stdev=4379.09 00:26:06.263 clat (usec): min=773, max=539449, avg=143618.90, stdev=84780.30 00:26:06.263 lat (usec): min=824, max=539525, avg=145211.02, stdev=85351.25 00:26:06.263 clat percentiles (msec): 00:26:06.263 | 1.00th=[ 3], 5.00th=[ 29], 10.00th=[ 49], 20.00th=[ 91], 00:26:06.263 | 30.00th=[ 108], 40.00th=[ 116], 50.00th=[ 122], 60.00th=[ 133], 00:26:06.263 | 70.00th=[ 167], 80.00th=[ 201], 90.00th=[ 253], 95.00th=[ 313], 00:26:06.263 | 99.00th=[ 435], 99.50th=[ 464], 99.90th=[ 531], 99.95th=[ 535], 00:26:06.263 | 99.99th=[ 542] 00:26:06.263 bw ( KiB/s): min=50688, max=230400, per=9.86%, avg=112512.00, stdev=42194.08, samples=20 00:26:06.263 iops : min= 198, max= 900, avg=439.50, stdev=164.82, samples=20 00:26:06.263 lat (usec) : 1000=0.22% 00:26:06.263 lat (msec) : 2=0.61%, 4=0.63%, 10=1.14%, 20=1.08%, 50=8.61% 00:26:06.263 lat (msec) : 100=11.80%, 250=65.66%, 500=9.91%, 750=0.34% 00:26:06.263 cpu : usr=1.29%, sys=1.54%, ctx=2158, majf=0, minf=1 00:26:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.263 issued rwts: total=0,4458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.263 job3: (groupid=0, jobs=1): err= 0: pid=1058524: Mon Dec 16 16:31:54 2024 00:26:06.263 write: IOPS=399, BW=99.8MiB/s (105MB/s)(1004MiB/10065msec); 0 zone resets 00:26:06.263 slat (usec): min=18, max=34385, avg=2077.20, stdev=5191.51 00:26:06.263 clat (usec): min=789, max=621785, avg=158268.98, stdev=111791.27 00:26:06.263 lat (usec): min=828, max=621832, avg=160346.18, stdev=113026.89 00:26:06.263 clat percentiles (usec): 00:26:06.263 | 1.00th=[ 1516], 5.00th=[ 7046], 10.00th=[ 17957], 20.00th=[ 48497], 00:26:06.263 | 30.00th=[ 80217], 40.00th=[113771], 50.00th=[143655], 60.00th=[181404], 00:26:06.263 | 70.00th=[214959], 80.00th=[258999], 90.00th=[304088], 95.00th=[358613], 00:26:06.263 | 
99.00th=[434111], 99.50th=[497026], 99.90th=[591397], 99.95th=[608175], 00:26:06.263 | 99.99th=[624952] 00:26:06.263 bw ( KiB/s): min=45056, max=259584, per=8.87%, avg=101222.05, stdev=62254.49, samples=20 00:26:06.263 iops : min= 176, max= 1014, avg=395.35, stdev=243.06, samples=20 00:26:06.263 lat (usec) : 1000=0.20% 00:26:06.263 lat (msec) : 2=2.54%, 4=0.90%, 10=3.06%, 20=4.76%, 50=9.09% 00:26:06.263 lat (msec) : 100=15.26%, 250=40.24%, 500=23.51%, 750=0.45% 00:26:06.263 cpu : usr=1.07%, sys=1.25%, ctx=1840, majf=0, minf=1 00:26:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.263 issued rwts: total=0,4016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.263 job4: (groupid=0, jobs=1): err= 0: pid=1058525: Mon Dec 16 16:31:54 2024 00:26:06.263 write: IOPS=352, BW=88.2MiB/s (92.5MB/s)(900MiB/10209msec); 0 zone resets 00:26:06.263 slat (usec): min=21, max=210115, avg=2018.76, stdev=6152.25 00:26:06.263 clat (usec): min=944, max=634411, avg=179323.61, stdev=109585.30 00:26:06.263 lat (usec): min=1019, max=641915, avg=181342.37, stdev=110463.04 00:26:06.263 clat percentiles (msec): 00:26:06.263 | 1.00th=[ 32], 5.00th=[ 80], 10.00th=[ 91], 20.00th=[ 111], 00:26:06.263 | 30.00th=[ 118], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 150], 00:26:06.263 | 70.00th=[ 192], 80.00th=[ 255], 90.00th=[ 355], 95.00th=[ 409], 00:26:06.263 | 99.00th=[ 542], 99.50th=[ 592], 99.90th=[ 625], 99.95th=[ 625], 00:26:06.263 | 99.99th=[ 634] 00:26:06.263 bw ( KiB/s): min=33792, max=153600, per=7.93%, avg=90547.20, stdev=40744.62, samples=20 00:26:06.263 iops : min= 132, max= 600, avg=353.70, stdev=159.16, samples=20 00:26:06.263 lat (usec) : 1000=0.08% 00:26:06.263 lat (msec) : 2=0.14%, 10=0.08%, 20=0.28%, 50=1.25%, 100=12.50% 00:26:06.263 lat (msec) : 250=65.32%, 500=18.41%, 750=1.94% 00:26:06.263 cpu : usr=0.80%, sys=1.05%, ctx=1381, majf=0, minf=1 00:26:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.263 issued rwts: total=0,3601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.263 job5: (groupid=0, jobs=1): err= 0: pid=1058526: Mon Dec 16 16:31:54 2024 00:26:06.263 write: IOPS=457, BW=114MiB/s (120MB/s)(1151MiB/10066msec); 0 zone resets 00:26:06.263 slat (usec): min=32, max=49823, avg=1489.49, stdev=4542.73 00:26:06.263 clat (msec): min=3, max=464, avg=138.09, stdev=117.60 00:26:06.263 lat (msec): min=3, max=469, avg=139.58, stdev=118.77 00:26:06.263 clat percentiles (msec): 00:26:06.263 | 1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 42], 00:26:06.263 | 30.00th=[ 43], 40.00th=[ 45], 50.00th=[ 65], 60.00th=[ 146], 00:26:06.263 | 70.00th=[ 203], 80.00th=[ 271], 90.00th=[ 313], 95.00th=[ 363], 00:26:06.263 | 99.00th=[ 414], 99.50th=[ 439], 99.90th=[ 460], 99.95th=[ 464], 00:26:06.263 | 99.99th=[ 464] 00:26:06.263 bw ( KiB/s): min=47616, max=374784, per=10.19%, avg=116249.60, stdev=99562.12, samples=20 00:26:06.263 iops : min= 186, max= 1464, avg=454.10, stdev=388.91, samples=20 00:26:06.263 lat (msec) : 4=0.02%, 10=0.48%, 20=1.43%, 50=44.42%, 100=9.90% 
00:26:06.263 lat (msec) : 250=18.03%, 500=25.72% 00:26:06.263 cpu : usr=0.98%, sys=1.65%, ctx=2008, majf=0, minf=1 00:26:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.263 issued rwts: total=0,4604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.263 job6: (groupid=0, jobs=1): err= 0: pid=1058527: Mon Dec 16 16:31:54 2024 00:26:06.263 write: IOPS=339, BW=85.0MiB/s (89.1MB/s)(860MiB/10119msec); 0 zone resets 00:26:06.263 slat (usec): min=21, max=65459, avg=2289.00, stdev=6372.96 00:26:06.263 clat (msec): min=3, max=586, avg=185.96, stdev=141.58 00:26:06.263 lat (msec): min=3, max=586, avg=188.24, stdev=143.27 00:26:06.263 clat percentiles (msec): 00:26:06.263 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 38], 20.00th=[ 46], 00:26:06.263 | 30.00th=[ 59], 40.00th=[ 122], 50.00th=[ 153], 60.00th=[ 213], 00:26:06.263 | 70.00th=[ 264], 80.00th=[ 296], 90.00th=[ 409], 95.00th=[ 464], 00:26:06.263 | 99.00th=[ 531], 99.50th=[ 542], 99.90th=[ 575], 99.95th=[ 584], 00:26:06.263 | 99.99th=[ 584] 00:26:06.263 bw ( KiB/s): min=30208, max=282624, per=7.57%, avg=86425.60, stdev=64775.72, samples=20 00:26:06.263 iops : min= 118, max= 1104, avg=337.60, stdev=253.03, samples=20 00:26:06.263 lat (msec) : 4=0.06%, 10=0.61%, 20=2.79%, 50=24.22%, 100=7.82% 00:26:06.263 lat (msec) : 250=31.11%, 500=30.33%, 750=3.05% 00:26:06.263 cpu : usr=0.73%, sys=1.07%, ctx=1617, majf=0, minf=1 00:26:06.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.263 issued rwts: total=0,3439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.263 job7: (groupid=0, jobs=1): err= 0: pid=1058528: Mon Dec 16 16:31:54 2024 00:26:06.263 write: IOPS=353, BW=88.3MiB/s (92.6MB/s)(902MiB/10210msec); 0 zone resets 00:26:06.263 slat (usec): min=21, max=162304, avg=2215.29, stdev=6012.04 00:26:06.263 clat (usec): min=962, max=562856, avg=178884.73, stdev=110570.60 00:26:06.263 lat (usec): min=1010, max=562893, avg=181100.02, stdev=111734.05 00:26:06.263 clat percentiles (msec): 00:26:06.263 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 84], 00:26:06.263 | 30.00th=[ 122], 40.00th=[ 155], 50.00th=[ 178], 60.00th=[ 188], 00:26:06.263 | 70.00th=[ 218], 80.00th=[ 279], 90.00th=[ 338], 95.00th=[ 376], 00:26:06.263 | 99.00th=[ 468], 99.50th=[ 498], 99.90th=[ 542], 99.95th=[ 567], 00:26:06.263 | 99.99th=[ 567] 00:26:06.263 bw ( KiB/s): min=38400, max=174592, per=7.95%, avg=90700.80, stdev=42804.86, samples=20 00:26:06.263 iops : min= 150, max= 682, avg=354.30, stdev=167.21, samples=20 00:26:06.263 lat (usec) : 1000=0.03% 00:26:06.263 lat (msec) : 2=0.31%, 4=0.67%, 10=2.58%, 20=3.69%, 50=9.04% 00:26:06.264 lat (msec) : 100=8.60%, 250=50.39%, 500=24.24%, 750=0.47% 00:26:06.264 cpu : usr=0.75%, sys=1.10%, ctx=1788, majf=0, minf=1 00:26:06.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.264 issued rwts: 
total=0,3606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.264 job8: (groupid=0, jobs=1): err= 0: pid=1058529: Mon Dec 16 16:31:54 2024 00:26:06.264 write: IOPS=657, BW=164MiB/s (172MB/s)(1679MiB/10210msec); 0 zone resets 00:26:06.264 slat (usec): min=20, max=229760, avg=1162.45, stdev=4932.71 00:26:06.264 clat (usec): min=1442, max=641341, avg=96097.87, stdev=103743.35 00:26:06.264 lat (usec): min=1499, max=641398, avg=97260.33, stdev=104854.81 00:26:06.264 clat percentiles (msec): 00:26:06.264 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 39], 00:26:06.264 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 62], 00:26:06.264 | 70.00th=[ 103], 80.00th=[ 128], 90.00th=[ 249], 95.00th=[ 351], 00:26:06.264 | 99.00th=[ 498], 99.50th=[ 527], 99.90th=[ 625], 99.95th=[ 625], 00:26:06.264 | 99.99th=[ 642] 00:26:06.264 bw ( KiB/s): min=31744, max=411648, per=14.92%, avg=170265.60, stdev=130618.95, samples=20 00:26:06.264 iops : min= 124, max= 1608, avg=665.10, stdev=510.23, samples=20 00:26:06.264 lat (msec) : 2=0.04%, 4=0.51%, 10=1.00%, 20=1.67%, 50=53.45% 00:26:06.264 lat (msec) : 100=12.14%, 250=21.22%, 500=9.07%, 750=0.91% 00:26:06.264 cpu : usr=1.64%, sys=1.59%, ctx=2700, majf=0, minf=2 00:26:06.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.264 issued rwts: total=0,6715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.264 job9: (groupid=0, jobs=1): err= 0: pid=1058530: Mon Dec 16 16:31:54 2024 00:26:06.264 write: IOPS=318, BW=79.7MiB/s (83.5MB/s)(806MiB/10119msec); 0 zone resets 00:26:06.264 slat (usec): min=27, max=129270, avg=2523.24, stdev=6731.11 00:26:06.264 clat (msec): min=7, max=620, avg=198.27, stdev=112.91 00:26:06.264 lat (msec): min=10, max=655, avg=200.79, stdev=114.54 00:26:06.264 clat percentiles (msec): 00:26:06.264 | 1.00th=[ 23], 5.00th=[ 50], 10.00th=[ 88], 20.00th=[ 116], 00:26:06.264 | 30.00th=[ 125], 40.00th=[ 144], 50.00th=[ 176], 60.00th=[ 199], 00:26:06.264 | 70.00th=[ 251], 80.00th=[ 279], 90.00th=[ 338], 95.00th=[ 418], 00:26:06.264 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 617], 99.95th=[ 617], 00:26:06.264 | 99.99th=[ 617] 00:26:06.264 bw ( KiB/s): min=28672, max=156160, per=7.09%, avg=80921.60, stdev=37915.47, samples=20 00:26:06.264 iops : min= 112, max= 610, avg=316.10, stdev=148.11, samples=20 00:26:06.264 lat (msec) : 10=0.03%, 20=0.62%, 50=4.40%, 100=8.44%, 250=56.48% 00:26:06.264 lat (msec) : 500=26.74%, 750=3.29% 00:26:06.264 cpu : usr=0.69%, sys=1.07%, ctx=1257, majf=0, minf=1 00:26:06.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.264 issued rwts: total=0,3224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.264 job10: (groupid=0, jobs=1): err= 0: pid=1058531: Mon Dec 16 16:31:54 2024 00:26:06.264 write: IOPS=421, BW=105MiB/s (111MB/s)(1077MiB/10211msec); 0 zone resets 00:26:06.264 slat (usec): min=24, max=52669, avg=1889.50, stdev=4653.00 00:26:06.264 clat (msec): min=6, max=546, avg=149.74, stdev=100.70 00:26:06.264 lat 
(msec): min=6, max=546, avg=151.63, stdev=101.88 00:26:06.264 clat percentiles (msec): 00:26:06.264 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 82], 00:26:06.264 | 30.00th=[ 101], 40.00th=[ 115], 50.00th=[ 122], 60.00th=[ 125], 00:26:06.264 | 70.00th=[ 150], 80.00th=[ 224], 90.00th=[ 300], 95.00th=[ 372], 00:26:06.264 | 99.00th=[ 485], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 527], 00:26:06.264 | 99.99th=[ 550] 00:26:06.264 bw ( KiB/s): min=38912, max=331776, per=9.52%, avg=108595.20, stdev=67473.85, samples=20 00:26:06.264 iops : min= 152, max= 1296, avg=424.20, stdev=263.57, samples=20 00:26:06.264 lat (msec) : 10=0.09%, 20=0.65%, 50=12.80%, 100=15.86%, 250=53.48% 00:26:06.264 lat (msec) : 500=16.72%, 750=0.39% 00:26:06.264 cpu : usr=0.92%, sys=1.31%, ctx=1648, majf=0, minf=1 00:26:06.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.264 issued rwts: total=0,4306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.264 00:26:06.264 Run status group 0 (all jobs): 00:26:06.264 WRITE: bw=1115MiB/s (1169MB/s), 79.4MiB/s-164MiB/s (83.2MB/s-172MB/s), io=11.1GiB (11.9GB), run=10065-10211msec 00:26:06.264 00:26:06.264 Disk stats (read/write): 00:26:06.264 nvme0n1: ios=49/6453, merge=0/0, ticks=38/1246545, in_queue=1246583, util=97.39% 00:26:06.264 nvme10n1: ios=52/8592, merge=0/0, ticks=1255/1238896, in_queue=1240151, util=100.00% 00:26:06.264 nvme1n1: ios=41/8705, merge=0/0, ticks=1103/1215391, in_queue=1216494, util=100.00% 00:26:06.264 nvme2n1: ios=0/7823, merge=0/0, ticks=0/1208812, in_queue=1208812, util=97.71% 00:26:06.264 nvme3n1: ios=0/7171, merge=0/0, ticks=0/1245470, in_queue=1245470, util=97.85% 00:26:06.264 nvme4n1: ios=51/8999, merge=0/0, ticks=866/1215224, in_queue=1216090, util=100.00% 00:26:06.264 nvme5n1: ios=0/6710, merge=0/0, ticks=0/1215414, in_queue=1215414, util=98.25% 00:26:06.264 nvme6n1: ios=0/7179, merge=0/0, ticks=0/1240210, in_queue=1240210, util=98.41% 00:26:06.264 nvme7n1: ios=0/13397, merge=0/0, ticks=0/1241832, in_queue=1241832, util=98.79% 00:26:06.264 nvme8n1: ios=0/6278, merge=0/0, ticks=0/1214867, in_queue=1214867, util=98.91% 00:26:06.264 nvme9n1: ios=40/8578, merge=0/0, ticks=818/1239262, in_queue=1240080, util=100.00% 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:06.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 
-- # grep -q -w SPDK1 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:06.264 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.264 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:06.832 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- 
# grep -q -w SPDK3 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.832 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:07.090 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.090 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:07.349 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDK5 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.349 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:07.608 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.608 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:07.867 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDK7 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.867 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:08.127 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:08.127 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDK9 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.127 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:08.386 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:08.386 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:08.386 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:08.386 rmmod nvme_tcp 00:26:08.386 rmmod nvme_fabrics 00:26:08.645 rmmod nvme_keyring 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1050926 ']' 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1050926 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1050926 ']' 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1050926 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050926 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050926' 00:26:08.645 killing process with pid 1050926 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1050926 00:26:08.645 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1050926 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.905 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.467 00:26:11.467 real 1m10.970s 00:26:11.467 user 4m15.495s 00:26:11.467 sys 0m18.098s 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.467 ************************************ 00:26:11.467 END TEST nvmf_multiconnection 00:26:11.467 ************************************ 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:11.467 ************************************ 00:26:11.467 START TEST nvmf_initiator_timeout 00:26:11.467 ************************************ 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.467 * Looking for test storage... 
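A note on the teardown traced above: multiconnection.sh tears down each of the eleven connections the same way, and nvmftestfini then cleans up the host. The loop below is a standalone sketch reconstructed from this xtrace; NVMF_SUBSYS=11 and the rpc.py path are assumptions for standalone use, and the real waitforserial_disconnect helper also bounds its polling rather than waiting forever.

    #!/usr/bin/env bash
    # Sketch: disconnect nqn.2016-06.io.spdk:cnode1..11, then delete each subsystem.
    NVMF_SUBSYS=11
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed path
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # wait until no block device reports serial SPDK$i any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done

After the loop, the trace shows nvmftestfini unloading nvme-tcp and nvme-fabrics (the rmmod lines), killing the nvmf_tgt reactor (pid 1050926), restoring iptables without the SPDK_NVMF rules, and flushing the test address from cvl_0_1 before the next test begins.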
00:26:11.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:11.467 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.468 --rc genhtml_branch_coverage=1 00:26:11.468 --rc genhtml_function_coverage=1 00:26:11.468 --rc genhtml_legend=1 00:26:11.468 --rc geninfo_all_blocks=1 00:26:11.468 --rc geninfo_unexecuted_blocks=1 00:26:11.468 00:26:11.468 ' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.468 --rc genhtml_branch_coverage=1 00:26:11.468 --rc genhtml_function_coverage=1 00:26:11.468 --rc genhtml_legend=1 00:26:11.468 --rc geninfo_all_blocks=1 00:26:11.468 --rc geninfo_unexecuted_blocks=1 00:26:11.468 00:26:11.468 ' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.468 --rc genhtml_branch_coverage=1 00:26:11.468 --rc genhtml_function_coverage=1 00:26:11.468 --rc genhtml_legend=1 00:26:11.468 --rc geninfo_all_blocks=1 00:26:11.468 --rc geninfo_unexecuted_blocks=1 00:26:11.468 00:26:11.468 ' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.468 --rc genhtml_branch_coverage=1 00:26:11.468 --rc genhtml_function_coverage=1 00:26:11.468 --rc genhtml_legend=1 00:26:11.468 --rc geninfo_all_blocks=1 00:26:11.468 --rc geninfo_unexecuted_blocks=1 00:26:11.468 00:26:11.468 ' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.468 16:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.468 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.039 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.039 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.039 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.039 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.040 16:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.040 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.040 16:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.040 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.040 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.040 16:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.040 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.040 16:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:26:18.040 00:26:18.040 --- 10.0.0.2 ping statistics --- 00:26:18.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.040 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:26:18.040 00:26:18.040 --- 10.0.0.1 ping statistics --- 00:26:18.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.040 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:18.040 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1063712 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
1063712 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1063712 ']' 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.041 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 [2024-12-16 16:32:05.892811] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:18.041 [2024-12-16 16:32:05.892861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.041 [2024-12-16 16:32:05.972270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.041 [2024-12-16 16:32:05.995499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.041 [2024-12-16 16:32:05.995538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.041 [2024-12-16 16:32:05.995545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.041 [2024-12-16 16:32:05.995551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.041 [2024-12-16 16:32:05.995556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
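A note on the "[: : integer expression expected" error captured earlier in this trace (nvmf/common.sh line 33, traced as '[' '' -eq 1 ']'): bash's [ builtin rejects an empty string where -eq expects an integer, and the run proceeds only because the failed test takes the false branch. A defensive sketch of the same check follows; FLAG is a stand-in name, since the actual variable tested at common.sh line 33 is not visible in this log:

    # Sketch only: default the variable so an unset/empty value compares as 0
    # instead of triggering "[: : integer expression expected".
    if [[ "${FLAG:-0}" -eq 1 ]]; then
        echo "flag enabled"
    fi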
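For orientation, a minimal sketch of what the nvmfappstart sequence above amounts to, assuming the namespace and paths from this run; the polling loop stands in for waitforlisten, whose real implementation in autotest_common.sh differs:

    # Launch the target app inside the target network namespace, with the
    # flags traced above (-i shm id, -e tracepoint mask, -m core mask).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the app answers on its RPC socket (waitforlisten's effect).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target process died
        sleep 0.5
    done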
00:26:18.041 [2024-12-16 16:32:05.996897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.041 [2024-12-16 16:32:05.997004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.041 [2024-12-16 16:32:05.997131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.041 [2024-12-16 16:32:05.997132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 Malloc0 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 Delay0 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 [2024-12-16 16:32:06.192577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.041 16:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 [2024-12-16 16:32:06.217805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.041 16:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:18.977 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:18.977 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.977 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.977 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.977 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1064326 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:20.881 16:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:20.881 [global] 00:26:20.881 thread=1 00:26:20.881 invalidate=1 00:26:20.881 rw=write 00:26:20.881 time_based=1 00:26:20.881 runtime=60 00:26:20.881 ioengine=libaio 00:26:20.881 direct=1 00:26:20.881 bs=4096 00:26:20.881 iodepth=1 00:26:20.881 norandommap=0 00:26:20.881 numjobs=1 00:26:20.881 00:26:20.881 verify_dump=1 00:26:20.881 verify_backlog=512 00:26:20.881 verify_state_save=0 00:26:20.881 do_verify=1 00:26:20.881 verify=crc32c-intel 00:26:20.881 [job0] 00:26:20.881 filename=/dev/nvme0n1 00:26:20.881 Could not set queue depth (nvme0n1) 00:26:21.140 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:21.140 fio-3.35 00:26:21.140 Starting 1 thread 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.425 true 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.425 true 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.425 true 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.425 true 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.425 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.956 16:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.956 true 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.956 true 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.956 true 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.956 true 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:26.956 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1064326 00:27:23.178 00:27:23.178 job0: (groupid=0, jobs=1): err= 0: pid=1064444: Mon Dec 16 16:33:09 2024 00:27:23.178 read: IOPS=16, BW=64.0KiB/s (65.6kB/s)(3844KiB/60025msec) 00:27:23.178 slat (usec): min=6, max=11663, avg=36.75, stdev=480.64 00:27:23.179 clat (usec): min=200, max=41412k, avg=62202.49, stdev=1335420.66 00:27:23.179 lat (usec): min=207, max=41412k, avg=62239.24, stdev=1335420.45 00:27:23.179 clat percentiles (usec): 00:27:23.179 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 00:27:23.179 | 20.00th=[ 235], 30.00th=[ 241], 40.00th=[ 247], 00:27:23.179 | 50.00th=[ 260], 60.00th=[ 41157], 70.00th=[ 41157], 00:27:23.179 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:23.179 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[17112761], 00:27:23.179 | 99.95th=[17112761], 99.99th=[17112761] 00:27:23.179 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60025msec); 0 zone resets 00:27:23.179 slat (nsec): min=9785, max=38945, avg=11168.83, stdev=2005.14 00:27:23.179 clat (usec): min=150, max=2608, avg=188.49, stdev=78.79 00:27:23.179 lat (usec): min=160, max=2625, avg=199.66, stdev=79.15 00:27:23.179 clat percentiles (usec): 00:27:23.179 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:27:23.179 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:27:23.179 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:27:23.179 | 99.00th=[ 297], 99.50th=[ 
322], 99.90th=[ 529], 99.95th=[ 2606], 00:27:23.179 | 99.99th=[ 2606] 00:27:23.179 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:27:23.179 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:23.179 lat (usec) : 250=72.24%, 500=5.19%, 750=0.05% 00:27:23.179 lat (msec) : 4=0.05%, 50=22.42%, >=2000=0.05% 00:27:23.179 cpu : usr=0.04%, sys=0.07%, ctx=1987, majf=0, minf=1 00:27:23.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.179 issued rwts: total=961,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:23.179 00:27:23.179 Run status group 0 (all jobs): 00:27:23.179 READ: bw=64.0KiB/s (65.6kB/s), 64.0KiB/s-64.0KiB/s (65.6kB/s-65.6kB/s), io=3844KiB (3936kB), run=60025-60025msec 00:27:23.179 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60025-60025msec 00:27:23.179 00:27:23.179 Disk stats (read/write): 00:27:23.179 nvme0n1: ios=1056/1024, merge=0/0, ticks=19400/177, in_queue=19577, util=99.69% 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:23.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:23.179 nvmf hotplug test: fio successful as expected 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:23.179 16:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.179 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.179 rmmod nvme_tcp 00:27:23.179 rmmod nvme_fabrics 00:27:23.179 rmmod nvme_keyring 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1063712 ']' 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1063712 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1063712 ']' 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1063712 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1063712 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1063712' 00:27:23.179 killing process with pid 1063712 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1063712 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1063712 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:23.179 16:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.179 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:24.116 00:27:24.116 real 1m12.728s 00:27:24.116 user 4m22.787s 00:27:24.116 sys 0m6.473s 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:24.116 ************************************ 00:27:24.116 END TEST nvmf_initiator_timeout 00:27:24.116 ************************************ 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.116 16:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.693 16:33:18 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:30.693 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:30.693 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.693 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:30.694 Found net devices under 0000:af:00.0: cvl_0_0 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:30.694 Found net devices under 0000:af:00.1: cvl_0_1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:30.694 ************************************ 00:27:30.694 START TEST nvmf_perf_adq 00:27:30.694 ************************************ 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:30.694 * Looking for test storage... 
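Stepping back, the nvmftestfini teardown traced just before this point reduces to the sketch below; iptr and remove_spdk_ns are the common.sh helpers doing the iptables and namespace cleanup, and since remove_spdk_ns runs with xtrace suppressed, the namespace deletion is inferred rather than shown above:

    modprobe -v -r nvme-tcp   # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid"           # killprocess 1063712
    # iptr: reload iptables minus the rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # expected effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1          # drop the initiator-side address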
00:27:30.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:30.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.694 --rc genhtml_branch_coverage=1 00:27:30.694 --rc genhtml_function_coverage=1 00:27:30.694 --rc genhtml_legend=1 00:27:30.694 --rc geninfo_all_blocks=1 00:27:30.694 --rc geninfo_unexecuted_blocks=1 00:27:30.694 00:27:30.694 ' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:30.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.694 --rc genhtml_branch_coverage=1 00:27:30.694 --rc genhtml_function_coverage=1 00:27:30.694 --rc genhtml_legend=1 00:27:30.694 --rc geninfo_all_blocks=1 00:27:30.694 --rc geninfo_unexecuted_blocks=1 00:27:30.694 00:27:30.694 ' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:30.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.694 --rc genhtml_branch_coverage=1 00:27:30.694 --rc genhtml_function_coverage=1 00:27:30.694 --rc genhtml_legend=1 00:27:30.694 --rc geninfo_all_blocks=1 00:27:30.694 --rc geninfo_unexecuted_blocks=1 00:27:30.694 00:27:30.694 ' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:30.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.694 --rc genhtml_branch_coverage=1 00:27:30.694 --rc genhtml_function_coverage=1 00:27:30.694 --rc genhtml_legend=1 00:27:30.694 --rc geninfo_all_blocks=1 00:27:30.694 --rc geninfo_unexecuted_blocks=1 00:27:30.694 00:27:30.694 ' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
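The lt 1.15 2 check traced above splits each version string on '.', '-' and ':' and compares the fields numerically from the left. A condensed sketch of that idea; scripts/common.sh's cmp_versions handles more operators and edge cases than this:

    lt() {  # succeed (return 0) when version $1 sorts before version $2
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"   # e.g. 1.15 -> (1 15)
        read -ra b <<< "$2"   # e.g. 2    -> (2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing fields count as 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"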
00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.694 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:30.695 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.695 16:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.976 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:35.977 16:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:35.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:35.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:35.977 Found net devices under 0000:af:00.0: cvl_0_0 00:27:35.977 16:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:35.977 Found net devices under 0000:af:00.1: cvl_0_1 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:35.977 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:36.545 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:39.078 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.354 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:44.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:44.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:44.355 Found net devices under 0000:af:00.0: cvl_0_0 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:44.355 Found net devices under 0000:af:00.1: cvl_0_1 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:44.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:44.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms
00:27:44.355
00:27:44.355 --- 10.0.0.2 ping statistics ---
00:27:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:44.355 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:44.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:44.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms
00:27:44.355
00:27:44.355 --- 10.0.0.1 ping statistics ---
00:27:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:44.355 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1082500
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1082500
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1082500 ']'
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:44.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:44.355 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:44.355 [2024-12-16 16:33:32.809903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
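Note: nvmftestinit above builds the whole test topology on one host: physical port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to act as the NVMe/TCP target (10.0.0.2), while its sibling port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); an iptables rule opens port 4420, the two pings prove reachability over the wire, and nvmf_tgt is then started inside the namespace with --wait-for-rpc. A condensed sketch of the same steps (the socket wait loop is a simplification of the harness's waitforlisten helper, and the nvmf_tgt path is relative to an SPDK build tree):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target-side port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator keeps cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target over the physical link
  # Start the target inside the namespace, then wait for its RPC socket:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done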
00:27:44.355 [2024-12-16 16:33:32.809950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.355 [2024-12-16 16:33:32.889532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.355 [2024-12-16 16:33:32.912599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.355 [2024-12-16 16:33:32.912636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.355 [2024-12-16 16:33:32.912644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.355 [2024-12-16 16:33:32.912651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.355 [2024-12-16 16:33:32.912657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.355 [2024-12-16 16:33:32.914360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.355 [2024-12-16 16:33:32.914389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.355 [2024-12-16 16:33:32.914494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.355 [2024-12-16 16:33:32.914495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.614 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.614 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:44.614 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.615 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.615 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 
16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 [2024-12-16 16:33:33.138555] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 Malloc1 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 [2024-12-16 16:33:33.197698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1082696 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
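Note: adq_configure_nvmf_target above is what prepares the target side for ADQ: placement IDs on the posix sock implementation, a TCP transport created with --sock-priority 0, and a 64 MiB malloc namespace exported at 10.0.0.2:4420. A sketch of the same sequence issued directly through scripts/rpc.py (the trace's rpc_cmd helper is a thin wrapper around that tool):

  RPC="scripts/rpc.py"                              # talks to /var/tmp/spdk.sock by default
  $RPC sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  $RPC framework_start_init                         # leave the --wait-for-rpc pause
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $RPC bdev_malloc_create 64 512 -b Malloc1         # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420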
00:27:44.615 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:27:47.146 "tick_rate": 2100000000,
00:27:47.146 "poll_groups": [
00:27:47.146 {
00:27:47.146 "name": "nvmf_tgt_poll_group_000",
00:27:47.146 "admin_qpairs": 1,
00:27:47.146 "io_qpairs": 1,
00:27:47.146 "current_admin_qpairs": 1,
00:27:47.146 "current_io_qpairs": 1,
00:27:47.146 "pending_bdev_io": 0,
00:27:47.146 "completed_nvme_io": 20534,
00:27:47.146 "transports": [
00:27:47.146 {
00:27:47.146 "trtype": "TCP"
00:27:47.146 }
00:27:47.146 ]
00:27:47.146 },
00:27:47.146 {
00:27:47.146 "name": "nvmf_tgt_poll_group_001",
00:27:47.146 "admin_qpairs": 0,
00:27:47.146 "io_qpairs": 1,
00:27:47.146 "current_admin_qpairs": 0,
00:27:47.146 "current_io_qpairs": 1,
00:27:47.146 "pending_bdev_io": 0,
00:27:47.146 "completed_nvme_io": 20647,
00:27:47.146 "transports": [
00:27:47.146 {
00:27:47.146 "trtype": "TCP"
00:27:47.146 }
00:27:47.146 ]
00:27:47.146 },
00:27:47.146 {
00:27:47.146 "name": "nvmf_tgt_poll_group_002",
00:27:47.146 "admin_qpairs": 0,
00:27:47.146 "io_qpairs": 1,
00:27:47.146 "current_admin_qpairs": 0,
00:27:47.146 "current_io_qpairs": 1,
00:27:47.146 "pending_bdev_io": 0,
00:27:47.146 "completed_nvme_io": 20821,
00:27:47.146 "transports": [
00:27:47.146 {
00:27:47.146 "trtype": "TCP"
00:27:47.146 }
00:27:47.146 ]
00:27:47.146 },
00:27:47.146 {
00:27:47.146 "name": "nvmf_tgt_poll_group_003",
00:27:47.146 "admin_qpairs": 0,
00:27:47.146 "io_qpairs": 1,
00:27:47.146 "current_admin_qpairs": 0,
00:27:47.146 "current_io_qpairs": 1,
00:27:47.146 "pending_bdev_io": 0,
00:27:47.146 "completed_nvme_io": 20539,
00:27:47.146 "transports": [
00:27:47.146 {
00:27:47.146 "trtype": "TCP"
00:27:47.146 }
00:27:47.146 ]
00:27:47.146 }
00:27:47.146 ]
00:27:47.146 }'
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:27:47.146 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1082696
00:27:55.262 Initializing NVMe Controllers
00:27:55.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:55.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:27:55.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:27:55.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:27:55.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:27:55.262 Initialization complete. Launching workers.
00:27:55.262 ========================================================
00:27:55.262 Latency(us)
00:27:55.262 Device Information : IOPS MiB/s Average min max
00:27:55.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10925.82 42.68 5857.78 2247.00 9931.30
00:27:55.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10797.42 42.18 5927.09 1932.77 12626.81
00:27:55.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10688.12 41.75 5988.61 2116.61 10575.63
00:27:55.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10764.82 42.05 5945.51 1874.36 10453.46
00:27:55.262 ========================================================
00:27:55.262 Total : 43176.17 168.66 5929.37 1874.36 12626.81
00:27:55.262
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:55.262 rmmod nvme_tcp
00:27:55.262 rmmod nvme_fabrics
00:27:55.262 rmmod nvme_keyring
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1082500 ']'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1082500
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1082500 ']'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1082500
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1082500
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1082500'
00:27:55.262 killing process with pid 1082500
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1082500
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1082500
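Note: the nvmf_get_stats/jq check above is the actual pass/fail criterion of this run: spdk_nvme_perf opened four connections from cores 4-7 (-c 0xF0), and with working ADQ steering each of the four target poll groups must own exactly one I/O qpair (the roughly equal completed_nvme_io counts, about 20.5k per group, are the visible symptom of good balance). A sketch of the same verification; jq's length here just turns each matching group into one output line for wc -l, exactly as the trace does:

  count=$(scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l)
  [[ $count -eq 4 ]] || echo "ADQ imbalance: only $count/4 poll groups hold exactly one qpair"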
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:55.262 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:57.174 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:57.174 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:27:57.174 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:27:57.174 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:27:58.551 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:28:01.085 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
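Note: between the two measurements the harness reloads the ice driver so the second run starts from clean channel state, and (as traced at the end of this section) flips the NIC and kernel knobs that ADQ needs: hardware TC offload on, the channel-pkt-inspect-optimize private flag off, and busy polling enabled. A sketch of that preparation, assuming the same interface and namespace names as the trace:

  modprobe -a sch_mqprio                 # queuing discipline the ADQ traffic classes rely on
  rmmod ice && modprobe ice && sleep 5   # reload the E810 driver, let the ports re-register
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1         # poll sockets instead of sleeping on interrupts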
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:06.359 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:06.359 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:06.359 Found net devices under 0000:af:00.0: cvl_0_0 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:06.359 Found net devices under 0000:af:00.1: cvl_0_1 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.359 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.360 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:28:06.360 00:28:06.360 --- 10.0.0.2 ping statistics --- 00:28:06.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.360 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:28:06.360 00:28:06.360 --- 10.0.0.1 ping statistics --- 00:28:06.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.360 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:06.360 net.core.busy_poll = 1 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:06.360 net.core.busy_read = 1 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1086511 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1086511 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1086511 ']' 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.360 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 [2024-12-16 16:33:54.975692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:06.620 [2024-12-16 16:33:54.975733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.620 [2024-12-16 16:33:55.054351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.620 [2024-12-16 16:33:55.076971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
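For reference, the ADQ driver configuration traced just above condenses to the following standalone sketch. Interface, address, and port are the values from this run; the namespace wrapper mirrors the harness setup, and plain `tc` stands in for the traced /usr/sbin/tc path. This is a summary of the traced commands, not an alternative implementation:

# Sketch of perf_adq.sh's adq_configure_driver, assuming the E810 port
# cvl_0_0 already sits inside the cvl_0_0_ns_spdk namespace (as here).
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes in channel mode: TC0 gets queues 0-1, TC1 (the
# ADQ class) gets queues 2-3, offloaded to the NIC (hw 1).
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 entirely in hardware.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Finally, scripts/perf/nvmf/set_xps_rxqs pins XPS queues to matching CPUs.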
00:28:06.620 [2024-12-16 16:33:55.077010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.620 [2024-12-16 16:33:55.077017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.620 [2024-12-16 16:33:55.077026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.620 [2024-12-16 16:33:55.077031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.620 [2024-12-16 16:33:55.078503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.620 [2024-12-16 16:33:55.078534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.620 [2024-12-16 16:33:55.078633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.620 [2024-12-16 16:33:55.078634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.620 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.879 16:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 [2024-12-16 16:33:55.286673] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 Malloc1 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.879 [2024-12-16 16:33:55.346565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1086536 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:06.879 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:08.781 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:08.781 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.781 16:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.781 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.781 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:08.781 "tick_rate": 2100000000, 00:28:08.781 "poll_groups": [ 00:28:08.781 { 00:28:08.781 "name": "nvmf_tgt_poll_group_000", 00:28:08.781 "admin_qpairs": 1, 00:28:08.781 "io_qpairs": 3, 00:28:08.781 "current_admin_qpairs": 1, 00:28:08.781 "current_io_qpairs": 3, 00:28:08.781 "pending_bdev_io": 0, 00:28:08.781 "completed_nvme_io": 29451, 00:28:08.781 "transports": [ 00:28:08.781 { 00:28:08.781 "trtype": "TCP" 00:28:08.781 } 00:28:08.781 ] 00:28:08.781 }, 00:28:08.781 { 00:28:08.781 "name": "nvmf_tgt_poll_group_001", 00:28:08.781 "admin_qpairs": 0, 00:28:08.781 "io_qpairs": 1, 00:28:08.781 "current_admin_qpairs": 0, 00:28:08.781 "current_io_qpairs": 1, 00:28:08.781 "pending_bdev_io": 0, 00:28:08.781 "completed_nvme_io": 27658, 00:28:08.781 "transports": [ 00:28:08.781 { 00:28:08.781 "trtype": "TCP" 00:28:08.781 } 00:28:08.781 ] 00:28:08.781 }, 00:28:08.781 { 00:28:08.781 "name": "nvmf_tgt_poll_group_002", 00:28:08.781 "admin_qpairs": 0, 00:28:08.781 "io_qpairs": 0, 00:28:08.781 "current_admin_qpairs": 0, 00:28:08.781 "current_io_qpairs": 0, 00:28:08.781 "pending_bdev_io": 0, 00:28:08.781 "completed_nvme_io": 0, 00:28:08.781 "transports": [ 00:28:08.781 { 00:28:08.781 "trtype": "TCP" 00:28:08.781 } 00:28:08.781 ] 00:28:08.781 }, 00:28:08.781 { 00:28:08.781 "name": "nvmf_tgt_poll_group_003", 00:28:08.781 "admin_qpairs": 0, 00:28:08.781 "io_qpairs": 0, 00:28:08.781 "current_admin_qpairs": 0, 00:28:08.781 "current_io_qpairs": 0, 00:28:08.781 "pending_bdev_io": 0, 00:28:08.781 "completed_nvme_io": 0, 00:28:08.781 "transports": [ 00:28:08.781 { 00:28:08.781 "trtype": "TCP" 00:28:08.781 } 00:28:08.781 ] 00:28:08.781 } 00:28:08.781 ] 00:28:08.781 }' 00:28:08.781 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:08.781 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:09.039 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:09.039 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:09.039 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1086536 00:28:17.154 Initializing NVMe Controllers 00:28:17.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:17.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:17.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:17.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:17.154 Initialization complete. Launching workers. 
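The pass/fail gate in the trace above is the poll-group count pulled from that nvmf_get_stats blob: with ADQ socket placement active, I/O qpairs collapse onto a subset of poll groups (here 000 and 001), and the test requires at least two groups to stay idle. A minimal sketch of that check, with rpc.py standing in for the harness's rpc_cmd wrapper:

# Count poll groups carrying no active I/O qpairs; in the stats above,
# groups 002 and 003 are idle, so count=2 and '[[ 2 -lt 2 ]]' is false.
count=$(./scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
          | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering ineffective: only $count idle poll groups" >&2
fi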
00:28:17.154 ======================================================== 00:28:17.154 Latency(us) 00:28:17.154 Device Information : IOPS MiB/s Average min max 00:28:17.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4601.70 17.98 13935.60 1586.33 60752.24 00:28:17.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5597.30 21.86 11436.74 1508.82 56940.08 00:28:17.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5605.40 21.90 11420.96 1817.55 59798.81 00:28:17.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15178.70 59.29 4215.80 1358.12 45483.36 00:28:17.154 ======================================================== 00:28:17.154 Total : 30983.10 121.03 8267.47 1358.12 60752.24 00:28:17.154 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.154 rmmod nvme_tcp 00:28:17.154 rmmod nvme_fabrics 00:28:17.154 rmmod nvme_keyring 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1086511 ']' 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1086511 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1086511 ']' 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1086511 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086511 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.154 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086511' 00:28:17.154 killing process with pid 1086511 00:28:17.155 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1086511 00:28:17.155 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1086511 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:17.414 
16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.414 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:20.703 00:28:20.703 real 0m50.859s 00:28:20.703 user 2m43.717s 00:28:20.703 sys 0m10.263s 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.703 ************************************ 00:28:20.703 END TEST nvmf_perf_adq 00:28:20.703 ************************************ 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.703 16:34:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.703 ************************************ 00:28:20.703 START TEST nvmf_shutdown 00:28:20.703 ************************************ 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:20.703 * Looking for test storage... 
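nvmftestfini, traced above, unwinds the setup in reverse: unload the NVMe host modules, kill the target by pid, strip only the firewall rules the harness tagged, then drop the namespace. A condensed sketch; the _remove_spdk_ns body is an assumption inferred from the namespace name, while the iptables round-trip matches the traced iptr helper:

modprobe -r nvme-tcp nvme-fabrics        # source of the rmmod lines above
kill "$nvmfpid"                          # killprocess 1086511
# iptr: restore the firewall minus every rule tagged SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1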
00:28:20.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.703 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.704 --rc genhtml_branch_coverage=1 00:28:20.704 --rc genhtml_function_coverage=1 00:28:20.704 --rc genhtml_legend=1 00:28:20.704 --rc geninfo_all_blocks=1 00:28:20.704 --rc geninfo_unexecuted_blocks=1 00:28:20.704 00:28:20.704 ' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.704 --rc genhtml_branch_coverage=1 00:28:20.704 --rc genhtml_function_coverage=1 00:28:20.704 --rc genhtml_legend=1 00:28:20.704 --rc geninfo_all_blocks=1 00:28:20.704 --rc geninfo_unexecuted_blocks=1 00:28:20.704 00:28:20.704 ' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.704 --rc genhtml_branch_coverage=1 00:28:20.704 --rc genhtml_function_coverage=1 00:28:20.704 --rc genhtml_legend=1 00:28:20.704 --rc geninfo_all_blocks=1 00:28:20.704 --rc geninfo_unexecuted_blocks=1 00:28:20.704 00:28:20.704 ' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.704 --rc genhtml_branch_coverage=1 00:28:20.704 --rc genhtml_function_coverage=1 00:28:20.704 --rc genhtml_legend=1 00:28:20.704 --rc geninfo_all_blocks=1 00:28:20.704 --rc geninfo_unexecuted_blocks=1 00:28:20.704 00:28:20.704 ' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
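The shutdown suite's preamble above probes the installed lcov version with the harness's dotted-version comparator. Stripped of tracing, the lt/cmp_versions logic reduces to roughly this sketch (simplified: the per-component decimal validation step is omitted):

# Return success when dotted version $1 sorts strictly before $2.
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov predates 2.x"   # the comparison traced above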
00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:20.704 16:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:20.704 ************************************ 00:28:20.704 START TEST nvmf_shutdown_tc1 00:28:20.704 ************************************ 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.704 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.277 16:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.277 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.278 16:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:27.278 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:27.278 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:27.278 Found net devices under 0000:af:00.0: cvl_0_0 00:28:27.278 16:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:27.278 Found net devices under 0000:af:00.1: cvl_0_1 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.278 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:28:27.278 00:28:27.278 --- 10.0.0.2 ping statistics --- 00:28:27.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.278 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:28:27.278 00:28:27.278 --- 10.0.0.1 ping statistics --- 00:28:27.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.278 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1091870 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1091870 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1091870 ']' 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
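Everything nvmftestinit and nvmfappstart did above condenses to the sketch below: split the dual-port NIC into a target namespace and a root-namespace initiator, open the firewall for the NVMe/TCP port, then launch nvmf_tgt inside the namespace and wait for its RPC socket. Names and addresses are the ones from this run; the polling loop is an assumed stand-in for waitforlisten:

# Target port goes into its own namespace so NVMe/TCP traffic actually
# crosses the link between the two E810 ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept inbound NVMe/TCP on the initiator side, tagged for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Start the target inside the namespace on cores 1-4 (-m 0x1E).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Assumed stand-in for waitforlisten: poll the RPC socket until it answers.
until ./scripts/rpc.py rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    sleep 0.1
done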
00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.278 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.278 [2024-12-16 16:34:15.227006] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:27.278 [2024-12-16 16:34:15.227052] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.279 [2024-12-16 16:34:15.304821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.279 [2024-12-16 16:34:15.326944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.279 [2024-12-16 16:34:15.326983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.279 [2024-12-16 16:34:15.326991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.279 [2024-12-16 16:34:15.326996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.279 [2024-12-16 16:34:15.327001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.279 [2024-12-16 16:34:15.328517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.279 [2024-12-16 16:34:15.328627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.279 [2024-12-16 16:34:15.328710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.279 [2024-12-16 16:34:15.328711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 [2024-12-16 16:34:15.468583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:27.279 16:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.279 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 Malloc1 
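The ten `for i ... cat` passes above assemble rpcs.txt, one stanza per subsystem, and the file is replayed through rpc_cmd at shutdown.sh@36; the heredoc bodies themselves are not echoed into this trace. A plausible reconstruction of a single stanza, inferred from the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notices around this point (the RPC method names are standard SPDK ones, but the malloc size and block size here are assumed, not taken from this log):

    # hypothetical stanza for subsystem $i -- reconstructed, not shown verbatim in this log
    bdev_malloc_create -b Malloc$i 64 512                           # size/block size assumed
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420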
00:28:27.279 [2024-12-16 16:34:15.584376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.279 Malloc2 00:28:27.279 Malloc3 00:28:27.279 Malloc4 00:28:27.279 Malloc5 00:28:27.279 Malloc6 00:28:27.279 Malloc7 00:28:27.279 Malloc8 00:28:27.539 Malloc9 00:28:27.539 Malloc10 00:28:27.539 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.539 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:27.539 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.539 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1092141 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1092141 /var/tmp/bdevperf.sock 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1092141 ']' 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:27.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
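The `--json /dev/fd/63` in the bdev_svc command line above is process substitution at work: the configuration JSON is generated on the fly and handed to the app through a file descriptor, never touching disk. The invocation (echoed verbatim later by the `Killed` message from shutdown.sh line 74) has this shape:

    $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}")

The `config=()` machinery traced next is gen_nvmf_target_json building that JSON.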
00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.539 { 00:28:27.539 "params": { 00:28:27.539 "name": "Nvme$subsystem", 00:28:27.539 "trtype": "$TEST_TRANSPORT", 00:28:27.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.539 "adrfam": "ipv4", 00:28:27.539 "trsvcid": "$NVMF_PORT", 00:28:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.539 "hdgst": ${hdgst:-false}, 00:28:27.539 "ddgst": ${ddgst:-false} 00:28:27.539 }, 00:28:27.539 "method": "bdev_nvme_attach_controller" 00:28:27.539 } 00:28:27.539 EOF 00:28:27.539 )") 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.539 { 00:28:27.539 "params": { 00:28:27.539 "name": "Nvme$subsystem", 00:28:27.539 "trtype": "$TEST_TRANSPORT", 00:28:27.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.539 "adrfam": "ipv4", 00:28:27.539 "trsvcid": "$NVMF_PORT", 00:28:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.539 "hdgst": ${hdgst:-false}, 00:28:27.539 "ddgst": ${ddgst:-false} 00:28:27.539 }, 00:28:27.539 "method": "bdev_nvme_attach_controller" 00:28:27.539 } 00:28:27.539 EOF 00:28:27.539 )") 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.539 { 00:28:27.539 "params": { 00:28:27.539 "name": "Nvme$subsystem", 00:28:27.539 "trtype": "$TEST_TRANSPORT", 00:28:27.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.539 "adrfam": "ipv4", 00:28:27.539 "trsvcid": "$NVMF_PORT", 00:28:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.539 "hdgst": ${hdgst:-false}, 00:28:27.539 "ddgst": ${ddgst:-false} 00:28:27.539 }, 00:28:27.539 "method": "bdev_nvme_attach_controller" 00:28:27.539 } 00:28:27.539 EOF 00:28:27.539 )") 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:27.539 { 00:28:27.539 "params": { 00:28:27.539 "name": "Nvme$subsystem", 00:28:27.539 "trtype": "$TEST_TRANSPORT", 00:28:27.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.539 "adrfam": "ipv4", 00:28:27.539 "trsvcid": "$NVMF_PORT", 00:28:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.539 "hdgst": ${hdgst:-false}, 00:28:27.539 "ddgst": ${ddgst:-false} 00:28:27.539 }, 00:28:27.539 "method": "bdev_nvme_attach_controller" 00:28:27.539 } 00:28:27.539 EOF 00:28:27.539 )") 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.539 { 00:28:27.539 "params": { 00:28:27.539 "name": "Nvme$subsystem", 00:28:27.539 "trtype": "$TEST_TRANSPORT", 00:28:27.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.539 "adrfam": "ipv4", 00:28:27.539 "trsvcid": "$NVMF_PORT", 00:28:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.539 "hdgst": ${hdgst:-false}, 00:28:27.539 "ddgst": ${ddgst:-false} 00:28:27.539 }, 00:28:27.539 "method": "bdev_nvme_attach_controller" 00:28:27.539 } 00:28:27.539 EOF 00:28:27.539 )") 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.539 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.539 { 00:28:27.539 "params": { 00:28:27.539 "name": "Nvme$subsystem", 00:28:27.539 "trtype": "$TEST_TRANSPORT", 00:28:27.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.539 "adrfam": "ipv4", 00:28:27.539 "trsvcid": "$NVMF_PORT", 00:28:27.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.539 "hdgst": ${hdgst:-false}, 00:28:27.539 "ddgst": ${ddgst:-false} 00:28:27.539 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 } 00:28:27.540 EOF 00:28:27.540 )") 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.540 { 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme$subsystem", 00:28:27.540 "trtype": "$TEST_TRANSPORT", 00:28:27.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "$NVMF_PORT", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.540 "hdgst": ${hdgst:-false}, 00:28:27.540 "ddgst": ${ddgst:-false} 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 } 00:28:27.540 EOF 00:28:27.540 )") 00:28:27.540 [2024-12-16 16:34:16.056789] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:27.540 [2024-12-16 16:34:16.056838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.540 { 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme$subsystem", 00:28:27.540 "trtype": "$TEST_TRANSPORT", 00:28:27.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "$NVMF_PORT", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.540 "hdgst": ${hdgst:-false}, 00:28:27.540 "ddgst": ${ddgst:-false} 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 } 00:28:27.540 EOF 00:28:27.540 )") 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.540 { 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme$subsystem", 00:28:27.540 "trtype": "$TEST_TRANSPORT", 00:28:27.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "$NVMF_PORT", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.540 "hdgst": ${hdgst:-false}, 00:28:27.540 "ddgst": ${ddgst:-false} 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 } 00:28:27.540 EOF 00:28:27.540 )") 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.540 { 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme$subsystem", 00:28:27.540 "trtype": "$TEST_TRANSPORT", 00:28:27.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "$NVMF_PORT", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.540 "hdgst": ${hdgst:-false}, 00:28:27.540 "ddgst": ${ddgst:-false} 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 } 00:28:27.540 EOF 00:28:27.540 )") 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
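What the trace above shows, compressed: each pass of the `for subsystem` loop captures an unquoted heredoc, so `$subsystem`, `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP` and `$NVMF_PORT` are expanded at capture time and one JSON fragment per controller is pushed into the `config` array; `IFS=,` then joins the fragments and `jq .` validates and pretty-prints the final document. A minimal sketch of the same mechanism (shape simplified; the real gen_nvmf_target_json in nvmf/common.sh emits more fields and a different wrapper object):

    gen_json() {   # sketch only, not the nvmf/common.sh source
      local subsystem config=()
      for subsystem in "${@:-1}"; do
        # expansion happens here: one attach_controller fragment per subsystem
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s"}, "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem")")
      done
      local IFS=,   # fragments joined with commas, exactly what the printf '%s\n' trace shows
      jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }

Called as `TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 gen_json 1 2 3`, this yields fragments like the ten resolved Nvme entries printed below.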
00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:27.540 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme1", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme2", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme3", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme4", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme5", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme6", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme7", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme8", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme9", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 },{ 00:28:27.540 "params": { 00:28:27.540 "name": "Nvme10", 00:28:27.540 "trtype": "tcp", 00:28:27.540 "traddr": "10.0.0.2", 00:28:27.540 "adrfam": "ipv4", 00:28:27.540 "trsvcid": "4420", 00:28:27.540 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:27.540 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:27.540 "hdgst": false, 00:28:27.540 "ddgst": false 00:28:27.540 }, 00:28:27.540 "method": "bdev_nvme_attach_controller" 00:28:27.540 }' 00:28:27.540 [2024-12-16 16:34:16.134986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.799 [2024-12-16 16:34:16.157690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1092141 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:29.703 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:30.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1092141 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:30.639 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1091870 00:28:30.639 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:30.639 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:30.639 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:30.639 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 [2024-12-16 16:34:18.982720] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:30.640 [2024-12-16 16:34:18.982773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092617 ] 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.640 { 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme$subsystem", 00:28:30.640 "trtype": "$TEST_TRANSPORT", 00:28:30.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "$NVMF_PORT", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.640 "hdgst": ${hdgst:-false}, 00:28:30.640 "ddgst": ${ddgst:-false} 00:28:30.640 }, 00:28:30.640 "method": "bdev_nvme_attach_controller" 00:28:30.640 } 00:28:30.640 EOF 00:28:30.640 )") 00:28:30.640 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:30.640 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
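Before the I/O starts, it helps to see tc1's assertion end to end: stand up the target with ten subsystems, hard-kill a client application mid-flight, check that the target survived, then drive verified I/O at it. Condensed from the shutdown.sh steps traced above (PIDs vary per run):

    bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
    perfpid=$!
    rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init     # @81: wait until fully attached
    kill -9 $perfpid                                          # @84: SIGKILL, no cleanup path runs
    kill -0 $nvmfpid                                          # @89: the target itself must still be alive
    bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1  # @92: now do real I/O

In the bdevperf flags, -q is the per-job queue depth, -o the I/O size in bytes (64 KiB here), -w the workload (`verify` reads back and checks what it wrote), and -t the run time in seconds.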
00:28:30.640 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:30.640 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:30.640 "params": { 00:28:30.640 "name": "Nvme1", 00:28:30.640 "trtype": "tcp", 00:28:30.640 "traddr": "10.0.0.2", 00:28:30.640 "adrfam": "ipv4", 00:28:30.640 "trsvcid": "4420", 00:28:30.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.640 "hdgst": false, 00:28:30.640 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme2", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:30.641 "hdgst": false, 00:28:30.641 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme3", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:30.641 "hdgst": false, 00:28:30.641 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme4", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:30.641 "hdgst": false, 00:28:30.641 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme5", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:30.641 "hdgst": false, 00:28:30.641 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme6", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:30.641 "hdgst": false, 00:28:30.641 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme7", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:30.641 "hdgst": false, 00:28:30.641 "ddgst": false 00:28:30.641 }, 00:28:30.641 "method": "bdev_nvme_attach_controller" 00:28:30.641 },{ 00:28:30.641 "params": { 00:28:30.641 "name": "Nvme8", 00:28:30.641 "trtype": "tcp", 00:28:30.641 "traddr": "10.0.0.2", 00:28:30.641 "adrfam": "ipv4", 00:28:30.641 "trsvcid": "4420", 00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:30.641 "hdgst": false,
00:28:30.641 "ddgst": false
00:28:30.641 },
00:28:30.641 "method": "bdev_nvme_attach_controller"
00:28:30.641 },{
00:28:30.641 "params": {
00:28:30.641 "name": "Nvme9",
00:28:30.641 "trtype": "tcp",
00:28:30.641 "traddr": "10.0.0.2",
00:28:30.641 "adrfam": "ipv4",
00:28:30.641 "trsvcid": "4420",
00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:28:30.641 "hdgst": false,
00:28:30.641 "ddgst": false
00:28:30.641 },
00:28:30.641 "method": "bdev_nvme_attach_controller"
00:28:30.641 },{
00:28:30.641 "params": {
00:28:30.641 "name": "Nvme10",
00:28:30.641 "trtype": "tcp",
00:28:30.641 "traddr": "10.0.0.2",
00:28:30.641 "adrfam": "ipv4",
00:28:30.641 "trsvcid": "4420",
00:28:30.641 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:28:30.641 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:28:30.641 "hdgst": false,
00:28:30.641 "ddgst": false
00:28:30.641 },
00:28:30.641 "method": "bdev_nvme_attach_controller"
00:28:30.641 }'
00:28:30.641 [2024-12-16 16:34:19.058844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.641 [2024-12-16 16:34:19.081464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:32.016 Running I/O for 1 seconds...
00:28:33.244 2254.00 IOPS, 140.88 MiB/s
00:28:33.244 Latency(us)
[2024-12-16T15:34:21.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.244 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme1n1 : 1.15 283.79 17.74 0.00 0.00 221263.65 10298.51 209715.20
00:28:33.244 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme2n1 : 1.15 277.06 17.32 0.00 0.00 224830.03 18100.42 232684.01
00:28:33.244 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme3n1 : 1.15 280.72 17.54 0.00 0.00 219375.24 3183.18 215707.06
00:28:33.244 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme4n1 : 1.13 286.69 17.92 0.00 0.00 204919.44 16227.96 209715.20
00:28:33.244 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme5n1 : 1.16 275.29 17.21 0.00 0.00 218235.66 18474.91 222697.57
00:28:33.244 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme6n1 : 1.15 282.98 17.69 0.00 0.00 207446.65 8925.38 203723.34
00:28:33.244 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme7n1 : 1.16 275.85 17.24 0.00 0.00 211530.31 15104.49 227690.79
00:28:33.244 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme8n1 : 1.17 274.51 17.16 0.00 0.00 209614.65 14355.50 226692.14
00:28:33.244 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme9n1 : 1.17 272.90 17.06 0.00 0.00 207347.71 29959.31 217704.35
00:28:33.244 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:33.244 Verification LBA range: start 0x0 length 0x400
00:28:33.244 Nvme10n1 : 1.17 273.30 17.08 0.00 0.00 204640.26 16352.79 232684.01
00:28:33.244 [2024-12-16T15:34:21.853Z] ===================================================================================================================
00:28:33.244 [2024-12-16T15:34:21.853Z] Total : 2783.09 173.94 0.00 0.00 212920.89 3183.18 232684.01
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:33.244 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:33.244 rmmod nvme_tcp
00:28:33.244 rmmod nvme_fabrics
00:28:33.576 rmmod nvme_keyring
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1091870 ']'
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1091870
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1091870 ']'
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1091870
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091870
00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:33.576 16:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091870' 00:28:33.576 killing process with pid 1091870 00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1091870 00:28:33.576 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1091870 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.884 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.790 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:35.790 00:28:35.790 real 0m15.114s 00:28:35.790 user 0m33.667s 00:28:35.790 sys 0m5.761s 00:28:35.790 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.790 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.790 ************************************ 00:28:35.790 END TEST nvmf_shutdown_tc1 00:28:35.790 ************************************ 00:28:36.049 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:36.049 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.050 ************************************ 00:28:36.050 START TEST nvmf_shutdown_tc2 00:28:36.050 ************************************ 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.050 16:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:36.050 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.050 16:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:36.050 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:36.050 Found net devices under 0000:af:00.0: cvl_0_0 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.050 16:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:36.050 Found net devices under 0000:af:00.1: cvl_0_1 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.050 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.051 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:28:36.310 00:28:36.310 --- 10.0.0.2 ping statistics --- 00:28:36.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.310 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:28:36.310 00:28:36.310 --- 10.0.0.1 ping statistics --- 00:28:36.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.310 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1093627 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1093627 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093627 ']' 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.310 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.310 [2024-12-16 16:34:24.873844] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:36.310 [2024-12-16 16:34:24.873886] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.570 [2024-12-16 16:34:24.938031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.570 [2024-12-16 16:34:24.960777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.570 [2024-12-16 16:34:24.960814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.570 [2024-12-16 16:34:24.960822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.570 [2024-12-16 16:34:24.960828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.570 [2024-12-16 16:34:24.960834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
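The nvmfappstart sequence above reduces to a small, reusable pattern: launch nvmf_tgt inside the namespace that was just wired up, record its pid (nvmfpid=1093627 here), and poll until the RPC socket accepts commands before the test proceeds. A minimal bash sketch of that pattern follows; NS, BIN, SOCK, the retry budget, and the rpc_get_methods probe are illustrative stand-ins, not the exact helpers waitforlisten uses in autotest_common.sh:

# Sketch: start an SPDK target inside a netns and wait for its RPC socket.
# NS/BIN/SOCK are placeholder values; the real scripts derive them from env.
NS=cvl_0_0_ns_spdk
BIN=./build/bin/nvmf_tgt
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$BIN" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

for _ in $(seq 1 100); do
    # rpc_get_methods only answers once the app is listening on $SOCK
    if ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
kill -0 "$nvmfpid"   # process still alive => startup succeeded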
00:28:36.570 [2024-12-16 16:34:24.962130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.570 [2024-12-16 16:34:24.962185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.570 [2024-12-16 16:34:24.962295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.570 [2024-12-16 16:34:24.962297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.570 [2024-12-16 16:34:25.101727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.570 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:36.829 Malloc1 00:28:36.829 [2024-12-16 16:34:25.209719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.829 Malloc2 00:28:36.829 Malloc3 00:28:36.829 Malloc4 00:28:36.829 Malloc5 00:28:36.829 Malloc6 00:28:37.087 Malloc7 00:28:37.087 Malloc8 00:28:37.087 Malloc9 00:28:37.087 Malloc10 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1093801 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1093801 /var/tmp/bdevperf.sock 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093801 ']' 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:37.087 16:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.087 { 00:28:37.087 "params": { 00:28:37.087 "name": "Nvme$subsystem", 00:28:37.087 "trtype": "$TEST_TRANSPORT", 00:28:37.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.087 "adrfam": "ipv4", 00:28:37.087 "trsvcid": "$NVMF_PORT", 00:28:37.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.087 "hdgst": ${hdgst:-false}, 00:28:37.087 "ddgst": ${ddgst:-false} 00:28:37.087 }, 00:28:37.087 "method": "bdev_nvme_attach_controller" 00:28:37.087 } 00:28:37.087 EOF 00:28:37.087 )") 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.087 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.087 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 
"name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 [2024-12-16 16:34:25.681404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:37.088 [2024-12-16 16:34:25.681452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093801 ] 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.088 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.088 { 00:28:37.088 "params": { 00:28:37.088 "name": "Nvme$subsystem", 00:28:37.088 "trtype": "$TEST_TRANSPORT", 00:28:37.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.088 "adrfam": "ipv4", 00:28:37.088 "trsvcid": "$NVMF_PORT", 00:28:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.088 "hdgst": ${hdgst:-false}, 00:28:37.088 "ddgst": ${ddgst:-false} 00:28:37.088 }, 00:28:37.088 "method": "bdev_nvme_attach_controller" 00:28:37.088 } 00:28:37.088 EOF 00:28:37.088 )") 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.347 { 00:28:37.347 "params": { 00:28:37.347 "name": "Nvme$subsystem", 00:28:37.347 "trtype": "$TEST_TRANSPORT", 00:28:37.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.347 
"adrfam": "ipv4", 00:28:37.347 "trsvcid": "$NVMF_PORT", 00:28:37.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.347 "hdgst": ${hdgst:-false}, 00:28:37.347 "ddgst": ${ddgst:-false} 00:28:37.347 }, 00:28:37.347 "method": "bdev_nvme_attach_controller" 00:28:37.347 } 00:28:37.347 EOF 00:28:37.347 )") 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.347 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.347 "params": { 00:28:37.347 "name": "Nvme1", 00:28:37.347 "trtype": "tcp", 00:28:37.347 "traddr": "10.0.0.2", 00:28:37.347 "adrfam": "ipv4", 00:28:37.347 "trsvcid": "4420", 00:28:37.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.347 "hdgst": false, 00:28:37.347 "ddgst": false 00:28:37.347 }, 00:28:37.347 "method": "bdev_nvme_attach_controller" 00:28:37.347 },{ 00:28:37.347 "params": { 00:28:37.347 "name": "Nvme2", 00:28:37.347 "trtype": "tcp", 00:28:37.347 "traddr": "10.0.0.2", 00:28:37.347 "adrfam": "ipv4", 00:28:37.347 "trsvcid": "4420", 00:28:37.347 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.347 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.347 "hdgst": false, 00:28:37.347 "ddgst": false 00:28:37.347 }, 00:28:37.347 "method": "bdev_nvme_attach_controller" 00:28:37.347 },{ 00:28:37.347 "params": { 00:28:37.347 "name": "Nvme3", 00:28:37.347 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme4", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme5", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme6", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme7", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 
00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme8", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme9", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 },{ 00:28:37.348 "params": { 00:28:37.348 "name": "Nvme10", 00:28:37.348 "trtype": "tcp", 00:28:37.348 "traddr": "10.0.0.2", 00:28:37.348 "adrfam": "ipv4", 00:28:37.348 "trsvcid": "4420", 00:28:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.348 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.348 "hdgst": false, 00:28:37.348 "ddgst": false 00:28:37.348 }, 00:28:37.348 "method": "bdev_nvme_attach_controller" 00:28:37.348 }' 00:28:37.348 [2024-12-16 16:34:25.761038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.348 [2024-12-16 16:34:25.783790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.725 Running I/O for 10 seconds... 
00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:38.984 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=11 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 11 -ge 100 ']' 00:28:39.243 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.503 16:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1093801
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093801 ']'
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093801
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093801
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093801'
00:28:39.503 killing process with pid 1093801
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093801
00:28:39.503 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093801
00:28:39.503 Received shutdown signal, test time was about 0.819647 seconds
00:28:39.503
00:28:39.503 Latency(us)
00:28:39.503 [2024-12-16T15:34:28.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme1n1 : 0.79 241.74 15.11 0.00 0.00 261518.22 19723.22 222697.57
00:28:39.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme2n1 : 0.79 241.52 15.10 0.00 0.00 255168.45 18474.91 212711.13
00:28:39.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme3n1 : 0.80 319.28 19.95 0.00 0.00 189975.65 12170.97 214708.42
00:28:39.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme4n1 : 0.81 315.70 19.73 0.00 0.00 188704.18 14417.92 207717.91
00:28:39.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme5n1 : 0.82 313.77 19.61 0.00 0.00 186083.72 20846.69 203723.34
00:28:39.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme6n1 : 0.82 312.58 19.54 0.00 0.00 182931.75 15728.64 214708.42
00:28:39.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme7n1 : 0.81 315.40 19.71 0.00 0.00 176863.09 23343.30 212711.13
00:28:39.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme8n1 : 0.78 245.09 15.32 0.00 0.00 221731.76 16602.45 211712.49
00:28:39.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.503 Nvme9n1 : 0.80 239.87 14.99 0.00 0.00 222410.20 32455.92 202724.69
00:28:39.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:39.503 Verification LBA range: start 0x0 length 0x400
00:28:39.504 Nvme10n1 : 0.81 238.34 14.90 0.00 0.00 219000.12 24466.77 240673.16
00:28:39.504 [2024-12-16T15:34:28.113Z] ===================================================================================================================
00:28:39.504 [2024-12-16T15:34:28.113Z] Total : 2783.29 173.96 0.00 0.00 206791.99 12170.97 240673.16
00:28:39.763 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1093627
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:40.699 rmmod nvme_tcp
00:28:40.699 rmmod nvme_fabrics
00:28:40.699 rmmod nvme_keyring
16:34:29
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1093627 ']' 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1093627 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093627 ']' 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093627 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093627 00:28:40.959 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:40.959 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:40.959 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093627' 00:28:40.959 killing process with pid 1093627 00:28:40.959 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093627 00:28:40.959 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093627 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.218 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.218 16:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.753 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.753 00:28:43.753 real 0m7.325s 00:28:43.753 user 0m21.358s 00:28:43.753 sys 0m1.334s 00:28:43.753 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.753 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.753 ************************************ 00:28:43.753 END TEST nvmf_shutdown_tc2 00:28:43.753 ************************************ 00:28:43.753 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:43.754 ************************************ 00:28:43.754 START TEST nvmf_shutdown_tc3 00:28:43.754 ************************************ 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.754 16:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.754 16:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.754 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.754 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.754 16:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:43.754 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.754 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.754 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.755 16:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.755 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:28:43.755 00:28:43.755 --- 10.0.0.2 ping statistics --- 00:28:43.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.755 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:43.755 00:28:43.755 --- 10.0.0.1 ping statistics --- 00:28:43.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.755 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1094914 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1094914 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1094914 ']' 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
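Condensed to its effect, the nvmf_tcp_init sequence traced above builds a two-namespace loopback out of the two NIC ports: the target port moves into its own network namespace, each side gets an address on 10.0.0.0/24, a firewall exception is punched for the NVMe/TCP port, and a ping in each direction confirms the path before any SPDK process starts. A minimal sketch of those steps (the cvl_0_0/cvl_0_1 interface names are specific to this rig):

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator gets 10.0.0.1, target gets 10.0.0.2, same /24.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring up both ports plus loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP (port 4420) through the initiator-side INPUT chain.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # One ping each way to validate the path.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1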
00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.755 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.755 [2024-12-16 16:34:32.204406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:43.755 [2024-12-16 16:34:32.204449] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.755 [2024-12-16 16:34:32.271033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:43.755 [2024-12-16 16:34:32.292633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.755 [2024-12-16 16:34:32.292674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.755 [2024-12-16 16:34:32.292682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.755 [2024-12-16 16:34:32.292688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.755 [2024-12-16 16:34:32.292692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.755 [2024-12-16 16:34:32.294022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.755 [2024-12-16 16:34:32.294128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.755 [2024-12-16 16:34:32.294235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:43.755 [2024-12-16 16:34:32.294235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.014 [2024-12-16 16:34:32.433874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:44.014 16:34:32 
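The -m 0x1E mask handed to nvmf_tgt above pins its reactors to cores 1-4, which matches the four reactor_run notices in the trace; core 0 is left free for the bdevperf instance started later with -c 0x1. Decoding the mask is plain bit arithmetic, e.g.:

    # 0x1E = 0b11110: bits 1..4 set, bit 0 clear.
    mask=0x1E
    for core in {0..7}; do
        (( mask >> core & 1 )) && echo "reactor on core $core"
    done
    # prints cores 1, 2, 3, 4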
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.014 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.014 Malloc1 
00:28:44.014 [2024-12-16 16:34:32.540946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.014 Malloc2 00:28:44.014 Malloc3 00:28:44.273 Malloc4 00:28:44.273 Malloc5 00:28:44.273 Malloc6 00:28:44.273 Malloc7 00:28:44.273 Malloc8 00:28:44.273 Malloc9 00:28:44.532 Malloc10 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1095179 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1095179 /var/tmp/bdevperf.sock 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095179 ']' 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:44.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
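The perf workload is wired up without touching disk: gen_nvmf_target_json emits the attach-controller config for subsystems 1-10, and bdevperf reads it through a process-substitution fd (the /dev/fd/63 in the trace). Roughly, relying on the harness's own helpers (the explicit & and $! bookkeeping here is a sketch of what the script does around this point):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                                    # 1095179 in this run
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock

    # -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: verify workload; -t 10: seconds.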
00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.532 { 00:28:44.532 "params": { 00:28:44.532 "name": "Nvme$subsystem", 00:28:44.532 "trtype": "$TEST_TRANSPORT", 00:28:44.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.532 "adrfam": "ipv4", 00:28:44.532 "trsvcid": "$NVMF_PORT", 00:28:44.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.532 "hdgst": ${hdgst:-false}, 00:28:44.532 "ddgst": ${ddgst:-false} 00:28:44.532 }, 00:28:44.532 "method": "bdev_nvme_attach_controller" 00:28:44.532 } 00:28:44.532 EOF 00:28:44.532 )") 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.532 { 00:28:44.532 "params": { 00:28:44.532 "name": "Nvme$subsystem", 00:28:44.532 "trtype": "$TEST_TRANSPORT", 00:28:44.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.532 "adrfam": "ipv4", 00:28:44.532 "trsvcid": "$NVMF_PORT", 00:28:44.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.532 "hdgst": ${hdgst:-false}, 00:28:44.532 "ddgst": ${ddgst:-false} 00:28:44.532 }, 00:28:44.532 "method": "bdev_nvme_attach_controller" 00:28:44.532 } 00:28:44.532 EOF 00:28:44.532 )") 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.532 { 00:28:44.532 "params": { 00:28:44.532 "name": "Nvme$subsystem", 00:28:44.532 "trtype": "$TEST_TRANSPORT", 00:28:44.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.532 "adrfam": "ipv4", 00:28:44.532 "trsvcid": "$NVMF_PORT", 00:28:44.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.532 "hdgst": ${hdgst:-false}, 00:28:44.532 "ddgst": ${ddgst:-false} 00:28:44.532 }, 00:28:44.532 "method": "bdev_nvme_attach_controller" 00:28:44.532 } 00:28:44.532 EOF 00:28:44.532 )") 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.532 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:44.532 { 00:28:44.532 "params": { 00:28:44.532 "name": "Nvme$subsystem", 00:28:44.532 "trtype": "$TEST_TRANSPORT", 00:28:44.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.532 "adrfam": "ipv4", 00:28:44.532 "trsvcid": "$NVMF_PORT", 00:28:44.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.532 "hdgst": ${hdgst:-false}, 00:28:44.532 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.533 { 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme$subsystem", 00:28:44.533 "trtype": "$TEST_TRANSPORT", 00:28:44.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "$NVMF_PORT", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.533 "hdgst": ${hdgst:-false}, 00:28:44.533 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.533 { 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme$subsystem", 00:28:44.533 "trtype": "$TEST_TRANSPORT", 00:28:44.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "$NVMF_PORT", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.533 "hdgst": ${hdgst:-false}, 00:28:44.533 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.533 { 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme$subsystem", 00:28:44.533 "trtype": "$TEST_TRANSPORT", 00:28:44.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "$NVMF_PORT", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.533 "hdgst": ${hdgst:-false}, 00:28:44.533 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 [2024-12-16 16:34:33.016832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:44.533 [2024-12-16 16:34:33.016880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095179 ] 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.533 { 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme$subsystem", 00:28:44.533 "trtype": "$TEST_TRANSPORT", 00:28:44.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "$NVMF_PORT", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.533 "hdgst": ${hdgst:-false}, 00:28:44.533 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.533 { 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme$subsystem", 00:28:44.533 "trtype": "$TEST_TRANSPORT", 00:28:44.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "$NVMF_PORT", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.533 "hdgst": ${hdgst:-false}, 00:28:44.533 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.533 { 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme$subsystem", 00:28:44.533 "trtype": "$TEST_TRANSPORT", 00:28:44.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "$NVMF_PORT", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.533 "hdgst": ${hdgst:-false}, 00:28:44.533 "ddgst": ${ddgst:-false} 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 } 00:28:44.533 EOF 00:28:44.533 )") 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
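The gen_nvmf_target_json helper being traced here accumulates one attach-controller template per subsystem into an array, then (in the entries that follow) comma-joins the array via IFS and pipes the result through jq. The trace uses a heredoc per entry; the sketch below swaps that for printf for brevity, and the simple [%s] wrapper standing in for the full bdevperf JSON document is an assumption:

    gen_config() {
        local s config=()
        for s in "${@:-1}"; do
            # One bdev_nvme_attach_controller stanza per subsystem.
            config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$s" "$s" "$s")")
        done
        local IFS=,                    # "${config[*]}" now joins with commas
        printf '[%s]\n' "${config[*]}" | jq .
    }

Called as gen_config 1 2 3 4 5 6 7 8 9 10, this reproduces the ten Nvme1..Nvme10 attach blocks printed in the trace below.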
00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:44.533 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme1", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme2", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme3", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme4", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme5", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme6", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme7", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.533 },{ 00:28:44.533 "params": { 00:28:44.533 "name": "Nvme8", 00:28:44.533 "trtype": "tcp", 00:28:44.533 "traddr": "10.0.0.2", 00:28:44.533 "adrfam": "ipv4", 00:28:44.533 "trsvcid": "4420", 00:28:44.533 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:44.533 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:44.533 "hdgst": false, 00:28:44.533 "ddgst": false 00:28:44.533 }, 00:28:44.533 "method": "bdev_nvme_attach_controller" 00:28:44.534 },{ 00:28:44.534 "params": { 00:28:44.534 "name": "Nvme9", 00:28:44.534 "trtype": "tcp", 00:28:44.534 "traddr": "10.0.0.2", 00:28:44.534 "adrfam": "ipv4", 00:28:44.534 "trsvcid": "4420", 00:28:44.534 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:44.534 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:44.534 "hdgst": false, 00:28:44.534 "ddgst": false 00:28:44.534 }, 00:28:44.534 "method": "bdev_nvme_attach_controller" 00:28:44.534 },{ 00:28:44.534 "params": { 00:28:44.534 "name": "Nvme10", 00:28:44.534 "trtype": "tcp", 00:28:44.534 "traddr": "10.0.0.2", 00:28:44.534 "adrfam": "ipv4", 00:28:44.534 "trsvcid": "4420", 00:28:44.534 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:44.534 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:44.534 "hdgst": false, 00:28:44.534 "ddgst": false 00:28:44.534 }, 00:28:44.534 "method": "bdev_nvme_attach_controller" 00:28:44.534 }' 00:28:44.534 [2024-12-16 16:34:33.092285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.534 [2024-12-16 16:34:33.114672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.436 Running I/O for 10 seconds... 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.437 16:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:46.437 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1094914 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1094914 ']' 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1094914 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.695 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094914 00:28:46.970 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.970 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.970 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1094914' 00:28:46.970 killing process with pid 1094914 00:28:46.970 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1094914 00:28:46.970 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1094914 00:28:46.970 [2024-12-16 16:34:35.328810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169f00 is same with the state(6) to be set
[the tcp.c:1790 *ERROR* line above repeats back-to-back, first for tqpair=0x2169f00, then for tqpair=0x216c980, then for tqpair=0x216a3f0, with only the timestamps advancing; the capture cuts off mid-entry]
*ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 
16:34:35.336503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same 
with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.336687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339156] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.972 [2024-12-16 16:34:35.339268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the 
state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.339465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b280 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 
16:34:35.340503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same 
with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.973 [2024-12-16 16:34:35.340758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340790] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.340816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b770 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.341999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the 
state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 16:34:35.342359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216baf0 is same with the state(6) to be set 00:28:46.974 [2024-12-16 
00:28:46.974 [last message repeated through 16:34:35.342376 for tqpair=0x216baf0]
00:28:46.974 [2024-12-16 16:34:35.343180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216bfc0 is same with the state(6) to be set
00:28:46.975 [last message repeated through 16:34:35.343664 for tqpair=0x216bfc0; the nvme_qpair/nvme_tcp output below was interleaved with these repeats and is de-interleaved here]
00:28:46.975 [2024-12-16 16:34:35.343269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:46.975 [2024-12-16 16:34:35.343300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.975 [2024-12-16 16:34:35.343311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:46.975 [2024-12-16 16:34:35.343319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.975 [2024-12-16 16:34:35.343327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:46.975 [2024-12-16 16:34:35.343335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.975 [2024-12-16 16:34:35.343343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:46.975 [2024-12-16 16:34:35.343353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.975 [2024-12-16 16:34:35.343362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727530 is same with the state(6) to be set
00:28:46.975 [the same four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) pairs precede each of the following recv-state errors]
00:28:46.975 [2024-12-16 16:34:35.343461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec610 is same with the state(6) to be set
00:28:46.975 [2024-12-16 16:34:35.343561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2704de0 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.343658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270c370 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.343742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27410b0 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.343822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6630 is same with the state(6) to be set
00:28:46.976 [three further ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs follow through 16:34:35.343880]
00:28:46.976 [2024-12-16 16:34:35.343887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.343894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.343900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddca0 is same with the state(6) to be set 00:28:46.976 [2024-12-16 16:34:35.343924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.343932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.343940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.343948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.343955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.343961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.343968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.343974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.343980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e1140 is same with the state(6) to be set 00:28:46.976 [2024-12-16 16:34:35.344000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.344007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.344014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.344023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.344030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.344037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.344043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.976 [2024-12-16 16:34:35.344050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.976 [2024-12-16 16:34:35.344057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e0cd0 is same with the state(6) to be set 00:28:46.976 [2024-12-16 
16:34:35.344215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.976 [2024-12-16 16:34:35.344231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.976 [2024-12-16 16:34:35.344235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.976 [2024-12-16 16:34:35.344251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.976 [2024-12-16 16:34:35.344258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.976 [2024-12-16 16:34:35.344266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.976 [2024-12-16 16:34:35.344273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.976 [2024-12-16 16:34:35.344281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.976 [2024-12-16 16:34:35.344291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.976 [2024-12-16 16:34:35.344306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.976 [2024-12-16 16:34:35.344313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.976 [2024-12-16 16:34:35.344317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.977 [2024-12-16 16:34:35.344650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set
00:28:46.977 [2024-12-16 16:34:35.344658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.977 [2024-12-16 16:34:35.344663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c490 is same with the state(6) to be set 00:28:46.978 [2024-12-16 16:34:35.344716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 
16:34:35.344772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 
16:34:35.344922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.344987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.344995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 
16:34:35.345069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 16:34:35.345211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.978 [2024-12-16 16:34:35.345218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.978 [2024-12-16 
16:34:35.345226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 
16:34:35.345488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345642] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.979 [2024-12-16 16:34:35.345866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.979 [2024-12-16 16:34:35.345873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.345881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.345887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.345895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.345901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.359986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.359997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.360006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.360017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.980 [2024-12-16 16:34:35.360027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.360571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.980 [2024-12-16 16:34:35.360601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.360612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.980 [2024-12-16 16:34:35.360621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.360632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.980 [2024-12-16 16:34:35.360645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.360655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.980 [2024-12-16 16:34:35.360665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.980 [2024-12-16 16:34:35.360675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2740e90 is same with the state(6) to be set 00:28:46.980 [2024-12-16 16:34:35.360701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2727530 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec610 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2704de0 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270c370 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27410b0 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6630 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddca0 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e1140 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.360844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e0cd0 (9): Bad file descriptor 00:28:46.980 [2024-12-16 16:34:35.363668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:46.980 [2024-12-16 16:34:35.363711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:46.981 [2024-12-16 16:34:35.363786] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 16:34:35.364317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.981 [2024-12-16 16:34:35.364349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d6630 with addr=10.0.0.2, port=4420 00:28:46.981 [2024-12-16 16:34:35.364362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6630 is same with the state(6) to be set 00:28:46.981 [2024-12-16 16:34:35.364465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.981 [2024-12-16 16:34:35.364480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddca0 with addr=10.0.0.2, port=4420 00:28:46.981 [2024-12-16 16:34:35.364491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddca0 is same with the state(6) to be set 00:28:46.981 [2024-12-16 16:34:35.365173] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 16:34:35.365235] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 16:34:35.365288] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 
16:34:35.365461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6630 (9): Bad file descriptor 00:28:46.981 [2024-12-16 16:34:35.365480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddca0 (9): Bad file descriptor 00:28:46.981 [2024-12-16 16:34:35.365558] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 16:34:35.365667] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 16:34:35.365720] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:46.981 [2024-12-16 16:34:35.365751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:46.981 [2024-12-16 16:34:35.365764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:46.981 [2024-12-16 16:34:35.365775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:46.981 [2024-12-16 16:34:35.365786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:46.981 [2024-12-16 16:34:35.365798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:46.981 [2024-12-16 16:34:35.365807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:46.981 [2024-12-16 16:34:35.365816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:46.981 [2024-12-16 16:34:35.365824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
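[editor's note] The "connect() failed, errno = 111" lines above are ECONNREFUSED on Linux: the initiator is retrying 10.0.0.2:4420 while the target subsystem is being torn down, so each reconnect attempt is refused and the controller reset ultimately fails. The following is a minimal stand-alone sketch of that connect/retry pattern, not SPDK's posix_sock_create(); the address and retry count are placeholders mirroring the log.

/* Stand-alone illustration of the connect/retry behavior seen above.
 * errno 111 (ECONNREFUSED) means nothing is listening at the target
 * address; only refusals are worth retrying here. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *ip, uint16_t port, int attempts)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            return fd;                        /* connected */
        }
        int err = errno;                      /* capture before close() */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                err, strerror(err));          /* 111 -> ECONNREFUSED */
        close(fd);
        if (err != ECONNREFUSED) {
            break;                            /* other errors: give up */
        }
        sleep(1);                             /* back off before retrying */
    }
    return -1;
}

int main(void)
{
    /* 10.0.0.2:4420 mirrors the target address in the log above. */
    int fd = connect_with_retry("10.0.0.2", 4420, 3);
    if (fd >= 0) {
        close(fd);
    }
    return 0;
}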
00:28:46.981 [2024-12-16 16:34:35.365933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.365949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.365969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.365981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.365993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 
16:34:35.366178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.981 [2024-12-16 16:34:35.366602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.981 [2024-12-16 16:34:35.366613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.366985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.366997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.367333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.367343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257a8c0 is same with the state(6) to be set 00:28:46.982 [2024-12-16 16:34:35.368878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:46.982 [2024-12-16 16:34:35.369140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.982 [2024-12-16 16:34:35.369161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27410b0 with addr=10.0.0.2, port=4420 00:28:46.982 [2024-12-16 16:34:35.369171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27410b0 is same with the state(6) to be set 00:28:46.982 [2024-12-16 16:34:35.369439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27410b0 (9): Bad file descriptor 00:28:46.982 [2024-12-16 16:34:35.369485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:46.982 [2024-12-16 16:34:35.369497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:46.982 [2024-12-16 16:34:35.369506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:46.982 [2024-12-16 16:34:35.369514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
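[editor's note] Every aborted completion above reports the status pair (00/08): status code type 0x0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion" in the NVMe spec, which is what each queued I/O returns once its submission queue is deleted during controller teardown. Below is a minimal sketch of decoding that pair, plus the p/m/dnr bits the log prints, from the upper half-word of completion dword 3. The struct-free layout follows the NVMe spec; it is a simplified stand-in, not SPDK's spdk_nvme_cpl.

/* Decode the "(SCT/SC) ... p:_ m:_ dnr:_" fields printed above from a
 * raw 16-bit status half-word (bit 0 = phase, bits 1-8 = SC,
 * bits 9-11 = SCT, bit 14 = more, bit 15 = do-not-retry). */
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x0
#define NVME_SC_ABORTED_SQ_DELETION 0x08

int main(void)
{
    uint16_t status = 0x0010;             /* example raw value: SC=0x08, SCT=0 */
    unsigned p   =  status        & 0x1;  /* phase tag */
    unsigned sc  = (status >> 1)  & 0xff; /* status code */
    unsigned sct = (status >> 9)  & 0x7;  /* status code type */
    unsigned m   = (status >> 14) & 0x1;  /* more */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION) {
        printf("aborted by SQ deletion: a path event, not a media error\n");
    }
    return 0;
}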
00:28:46.982 [2024-12-16 16:34:35.370538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2740e90 (9): Bad file descriptor 00:28:46.982 [2024-12-16 16:34:35.370675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.982 [2024-12-16 16:34:35.370702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.982 [2024-12-16 16:34:35.370710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.370990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.370998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.983 [2024-12-16 16:34:35.371332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.983 [2024-12-16 16:34:35.371340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.984 [2024-12-16 16:34:35.371348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.984 [2024-12-16 16:34:35.371357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.984 [2024-12-16 16:34:35.371365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.984 [2024-12-16 16:34:35.371374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.984 [2024-12-16 16:34:35.371381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.984 [2024-12-16 16:34:35.371390 .. 16:34:35.371778] [... 24 repeated nvme_qpair.c NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:40..63 nsid:1 lba:21504..24448 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:28:46.984 [2024-12-16 16:34:35.371786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e4eb0 is same with the state(6) to be set
00:28:46.984 [2024-12-16 16:34:35.372843 .. 16:34:35.373918] [... 64 repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (lba step 128) len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:28:46.986 [2024-12-16 16:34:35.373928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5e80 is same with the state(6) to be set
00:28:46.986 [2024-12-16 16:34:35.374976 .. 16:34:35.376067] [... 64 repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (lba step 128) len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:28:46.987 [2024-12-16 16:34:35.376075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e7350 is same with the state(6) to be set
00:28:46.987 [2024-12-16 16:34:35.377116 .. 16:34:35.378083] [... 57 repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0..56 nsid:1 lba:16384..23552 (lba step 128) len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:28:46.989 [2024-12-16 16:34:35.378092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.378208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.378216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e8680 is same with the state(6) to be set 00:28:46.989 [2024-12-16 16:34:35.379259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.989 [2024-12-16 16:34:35.379535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.989 [2024-12-16 16:34:35.379542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.379985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.379993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.990 [2024-12-16 16:34:35.380188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.990 [2024-12-16 16:34:35.380197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.380300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.380307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x33e2600 is same with the state(6) to be set 00:28:46.991 [2024-12-16 16:34:35.381292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.991 [2024-12-16 16:34:35.381820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.991 [2024-12-16 16:34:35.381828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.381990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:46.992 [2024-12-16 16:34:35.382068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 
16:34:35.382240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.992 [2024-12-16 16:34:35.382320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.992 [2024-12-16 16:34:35.382327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x362ff50 is same with the state(6) to be set 00:28:46.992 [2024-12-16 16:34:35.383300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:46.992 [2024-12-16 16:34:35.383321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:46.992 [2024-12-16 16:34:35.383333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:46.992 [2024-12-16 16:34:35.383344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:46.992 [2024-12-16 16:34:35.383419] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:46.992 [2024-12-16 16:34:35.383433] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:28:46.992 [2024-12-16 16:34:35.383505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:46.992 [2024-12-16 16:34:35.383518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:46.992 [2024-12-16 16:34:35.383775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.992 [2024-12-16 16:34:35.383791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e1140 with addr=10.0.0.2, port=4420
00:28:46.992 [2024-12-16 16:34:35.383800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e1140 is same with the state(6) to be set
00:28:46.992 [2024-12-16 16:34:35.383941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.993 [2024-12-16 16:34:35.383952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e0cd0 with addr=10.0.0.2, port=4420
00:28:46.993 [2024-12-16 16:34:35.383960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e0cd0 is same with the state(6) to be set
00:28:46.993 [2024-12-16 16:34:35.384113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.993 [2024-12-16 16:34:35.384125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2704de0 with addr=10.0.0.2, port=4420
00:28:46.993 [2024-12-16 16:34:35.384133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2704de0 is same with the state(6) to be set
00:28:46.993 [2024-12-16 16:34:35.384205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.993 [2024-12-16 16:34:35.384215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x270c370 with addr=10.0.0.2, port=4420
00:28:46.993 [2024-12-16 16:34:35.384223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270c370 is same with the state(6) to be set
00:28:46.993 [2024-12-16 16:34:35.385541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.993 [2024-12-16 16:34:35.385557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.993 [2024-12-16 16:34:35.385953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.385960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.385969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.385977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.385987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.385994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.993 [2024-12-16 16:34:35.386165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.993 [2024-12-16 16:34:35.386172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.994 [2024-12-16 16:34:35.386627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.994 [2024-12-16 16:34:35.386635] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2579590 is same with the state(6) to be set
00:28:46.994 [2024-12-16 16:34:35.387601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:46.994 [2024-12-16 16:34:35.387619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:46.994 [2024-12-16 16:34:35.387629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:46.994 task offset: 24576 on job bdev=Nvme3n1 fails
00:28:46.994
00:28:46.994 Latency(us)
[2024-12-16T15:34:35.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.994 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme1n1 ended in about 0.74 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme1n1 : 0.74 173.68 10.85 86.84 0.00 242884.67 20472.20 225693.50
00:28:46.994 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme2n1 ended in about 0.74 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme2n1 : 0.74 173.18 10.82 86.59 0.00 238372.25 28086.86 224694.86
00:28:46.994 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme3n1 ended in about 0.73 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme3n1 : 0.73 264.37 16.52 88.12 0.00 171610.94 13544.11 216705.71
00:28:46.994 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme4n1 ended in about 0.73 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme4n1 : 0.73 272.19 17.01 87.98 0.00 164171.13 19348.72 202724.69
00:28:46.994 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme5n1 ended in about 0.74 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme5n1 : 0.74 172.68 10.79 86.34 0.00 223593.49 42192.70 176759.95
00:28:46.994 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme6n1 ended in about 0.74 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme6n1 : 0.74 172.18 10.76 86.09 0.00 219184.44 17601.10 215707.06
00:28:46.994 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme7n1 ended in about 0.75 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.994 Nvme7n1 : 0.75 178.41 11.15 85.85 0.00 209262.14 16103.13 189742.32
00:28:46.994 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.994 Job: Nvme8n1 ended in about 0.75 seconds with error
00:28:46.994 Verification LBA range: start 0x0 length 0x400
00:28:46.995 Nvme8n1 : 0.75 171.24 10.70 85.62 0.00 210364.22 14168.26 215707.06
00:28:46.995 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.995 Job: Nvme9n1 ended in about 0.75 seconds with error
00:28:46.995 Verification LBA range: start 0x0 length 0x400
00:28:46.995 Nvme9n1 : 0.75 170.26 10.64 85.13 0.00 206768.36 21096.35 220700.28
00:28:46.995 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:46.995 Job: Nvme10n1 ended in about 0.73 seconds with error
00:28:46.995 Verification LBA range: start 0x0 length 0x400
00:28:46.995 Nvme10n1 : 0.73 174.65 10.92 87.32 0.00 195348.32 17975.59 237677.23
[2024-12-16T15:34:35.604Z] ===================================================================================================================
00:28:46.995 [2024-12-16T15:34:35.604Z] Total : 1922.82 120.18 865.88 0.00 205527.39 13544.11 237677.23
00:28:46.995 [2024-12-16 16:34:35.422899] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:46.995 [2024-12-16 16:34:35.422946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:46.995 [2024-12-16 16:34:35.423274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.995 [2024-12-16 16:34:35.423294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ec610 with addr=10.0.0.2, port=4420
00:28:46.995 [2024-12-16 16:34:35.423306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec610 is same with the state(6) to be set
00:28:46.995 [2024-12-16 16:34:35.423503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.995 [2024-12-16 16:34:35.423516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2727530 with addr=10.0.0.2, port=4420
00:28:46.995 [2024-12-16 16:34:35.423524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727530 is same with the state(6) to be set
00:28:46.995 [2024-12-16 16:34:35.423537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e1140 (9): Bad file descriptor
00:28:46.995 [2024-12-16 16:34:35.423549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e0cd0 (9): Bad file descriptor
00:28:46.995 [2024-12-16 16:34:35.423560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2704de0 (9): Bad file descriptor
00:28:46.995 [2024-12-16 16:34:35.423569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270c370 (9): Bad file descriptor
00:28:46.995 [2024-12-16 16:34:35.423892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.995 [2024-12-16 16:34:35.423908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddca0 with addr=10.0.0.2, port=4420
00:28:46.995 [2024-12-16 16:34:35.423918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddca0 is same with the state(6) to be set
00:28:46.995 [2024-12-16 16:34:35.424085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.995 [2024-12-16 16:34:35.424141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d6630 with addr=10.0.0.2, port=4420
00:28:46.995 [2024-12-16 16:34:35.424151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6630 is same with the state(6) to be set
00:28:46.995 [2024-12-16 16:34:35.424284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.995 [2024-12-16 16:34:35.424301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27410b0 with addr=10.0.0.2, port=4420
00:28:46.995 [2024-12-16 16:34:35.424309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27410b0 is same with the state(6) to be set
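errno = 111 is ECONNREFUSED: while the target is shutting down, nothing is listening on 10.0.0.2:4420, so every reconnect attempt above is rejected immediately rather than timing out, which is also why the bdevperf summary reports the damage in the Fail/s column (around 86 failed I/Os per second per job) while TO/s stays at 0.00. A minimal probe to distinguish a refused connect from a hung one, using bash's built-in /dev/tcp with the address and port taken from these records:

    # exits 0 only if something is accepting on 10.0.0.2:4420 again
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections again"
    else
        echo "connect refused or timed out, matching the errno = 111 records above"
    fi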
00:28:46.995 [2024-12-16 16:34:35.424507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-12-16 16:34:35.424518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2740e90 with addr=10.0.0.2, port=4420 00:28:46.995 [2024-12-16 16:34:35.424526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2740e90 is same with the state(6) to be set 00:28:46.995 [2024-12-16 16:34:35.424535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec610 (9): Bad file descriptor 00:28:46.995 [2024-12-16 16:34:35.424545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2727530 (9): Bad file descriptor 00:28:46.995 [2024-12-16 16:34:35.424554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.424561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.424572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.424581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.424589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.424595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.424602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.424608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.424615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.424622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.424629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.424635] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.424642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.424649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.424655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.424661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.424707] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
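The four-line cascade above (Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed) is bdev_nvme giving up on a controller whose reconnect attempts keep failing. For reference, a longer retry window can be requested when the controller is attached; a hedged sketch, since the option names assume a recent SPDK rpc.py and are not taken from this run:

    # ask bdev_nvme to retry a lost TCP controller every 2s for up to 30s
    # before declaring it permanently failed
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2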
00:28:46.995 [2024-12-16 16:34:35.424719] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:46.995 [2024-12-16 16:34:35.425042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddca0 (9): Bad file descriptor 00:28:46.995 [2024-12-16 16:34:35.425058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6630 (9): Bad file descriptor 00:28:46.995 [2024-12-16 16:34:35.425068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27410b0 (9): Bad file descriptor 00:28:46.995 [2024-12-16 16:34:35.425081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2740e90 (9): Bad file descriptor 00:28:46.995 [2024-12-16 16:34:35.425089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.425103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.425111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.425118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.425125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.425131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.425138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.425144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.425178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:46.995 [2024-12-16 16:34:35.425189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:46.995 [2024-12-16 16:34:35.425198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:46.995 [2024-12-16 16:34:35.425207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:46.995 [2024-12-16 16:34:35.425234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.425242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.425249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.425255] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
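The "(9)" in the Failed to flush records above is also an errno: once a refused socket has been closed, flushing the qpair hits EBADF (Bad file descriptor). If the errno(1) helper from moreutils happens to be installed (an assumption, it is not part of this run), the numeric codes in these records decode directly:

    errno 9     # prints: EBADF 9 Bad file descriptor
    errno 111   # prints: ECONNREFUSED 111 Connection refused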
00:28:46.995 [2024-12-16 16:34:35.425262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.425268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.425275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.425281] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.425288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.425294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.425300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.425306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:46.995 [2024-12-16 16:34:35.425314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:46.995 [2024-12-16 16:34:35.425320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:46.995 [2024-12-16 16:34:35.425327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:46.995 [2024-12-16 16:34:35.425333] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
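By this point every one of the ten subsystems (cnode1 through cnode10) has failed at least one reset, which is exactly the condition shutdown_tc3 is meant to provoke; the records that follow show bdev_nvme immediately retrying cnode1, cnode2, cnode5 and cnode6 and failing again. A one-line sketch to tally the failures per NQN from a saved copy of the console (build.log again being an illustrative name):

    # rank subsystems by how often their controller reset failed
    grep -o '\[nqn[^]]*\] Resetting controller failed' build.log | sort | uniq -c | sort -rn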
00:28:46.995 [2024-12-16 16:34:35.425560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-12-16 16:34:35.425575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x270c370 with addr=10.0.0.2, port=4420 00:28:46.995 [2024-12-16 16:34:35.425583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270c370 is same with the state(6) to be set 00:28:46.995 [2024-12-16 16:34:35.425682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-12-16 16:34:35.425693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2704de0 with addr=10.0.0.2, port=4420 00:28:46.995 [2024-12-16 16:34:35.425701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2704de0 is same with the state(6) to be set 00:28:46.996 [2024-12-16 16:34:35.425778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-12-16 16:34:35.425789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e0cd0 with addr=10.0.0.2, port=4420 00:28:46.996 [2024-12-16 16:34:35.425797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e0cd0 is same with the state(6) to be set 00:28:46.996 [2024-12-16 16:34:35.425893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-12-16 16:34:35.425903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e1140 with addr=10.0.0.2, port=4420 00:28:46.996 [2024-12-16 16:34:35.425911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e1140 is same with the state(6) to be set 00:28:46.996 [2024-12-16 16:34:35.425939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270c370 (9): Bad file descriptor 00:28:46.996 [2024-12-16 16:34:35.425950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2704de0 (9): Bad file descriptor 00:28:46.996 [2024-12-16 16:34:35.425960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e0cd0 (9): Bad file descriptor 00:28:46.996 [2024-12-16 16:34:35.425968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e1140 (9): Bad file descriptor 00:28:46.996 [2024-12-16 16:34:35.425994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:46.996 [2024-12-16 16:34:35.426002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:46.996 [2024-12-16 16:34:35.426009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:46.996 [2024-12-16 16:34:35.426016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:46.996 [2024-12-16 16:34:35.426025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:46.996 [2024-12-16 16:34:35.426031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:46.996 [2024-12-16 16:34:35.426038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:28:46.996 [2024-12-16 16:34:35.426044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:46.996 [2024-12-16 16:34:35.426051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:46.996 [2024-12-16 16:34:35.426057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:46.996 [2024-12-16 16:34:35.426065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:46.996 [2024-12-16 16:34:35.426071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:46.996 [2024-12-16 16:34:35.426078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:46.996 [2024-12-16 16:34:35.426087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:46.996 [2024-12-16 16:34:35.426101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:46.996 [2024-12-16 16:34:35.426108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:47.256 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1095179 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1095179 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1095179 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:48.191 16:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:48.191 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.192 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.192 rmmod nvme_tcp 00:28:48.192 rmmod nvme_fabrics 00:28:48.192 rmmod nvme_keyring 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1094914 ']' 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1094914 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1094914 ']' 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1094914 00:28:48.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1094914) - No such process 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1094914 is not found' 00:28:48.451 Process with pid 1094914 is not found 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.451 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.354 00:28:50.354 real 0m7.045s 00:28:50.354 user 0m16.181s 00:28:50.354 sys 0m1.208s 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.354 ************************************ 00:28:50.354 END TEST nvmf_shutdown_tc3 00:28:50.354 ************************************ 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.354 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:50.614 ************************************ 00:28:50.614 START TEST nvmf_shutdown_tc4 00:28:50.614 ************************************ 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:50.614 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.614 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:50.615 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:50.615 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:50.615 Found net devices under 0000:af:00.0: cvl_0_0 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:50.615 Found net devices under 0000:af:00.1: cvl_0_1 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.615 16:34:39 
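The pass above is nvmf/common.sh walking its NIC whitelist: the Mellanox ConnectX device IDs are pulled from the PCI cache, but since this rig is e810 the Intel 0x159b functions win, and each matching PCI function is then resolved to its kernel interface through sysfs. A minimal sketch of that sysfs walk, reusing the variable names visible in the trace (the two-entry device list is illustrative for this node):

  # Sketch of the PCI-to-netdev mapping traced above (simplified from
  # nvmf/common.sh; assumes both E810 ports are bound to the ice driver).
  pci_devs=(0000:af:00.0 0000:af:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # A bound NIC publishes its interface name under .../net/
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done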
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.615 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.874 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:28:50.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:28:50.875 00:28:50.875 --- 10.0.0.2 ping statistics --- 00:28:50.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.875 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:50.875 00:28:50.875 --- 10.0.0.1 ping statistics --- 00:28:50.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.875 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1096207 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1096207 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1096207 ']' 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.875 16:34:39 
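nvmf_tcp_init, traced at common.sh@250 through @291 above, is what turns the two ports into a two-host topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, and a ping in each direction proves the link. Condensed to its bare commands, all taken from the trace (the iptables rule is what the ipts wrapper expands to):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean ports
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                    # root ns reaches the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace reaches the initiator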
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.875 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:50.875 [2024-12-16 16:34:39.355432] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:50.875 [2024-12-16 16:34:39.355483] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.875 [2024-12-16 16:34:39.434235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.875 [2024-12-16 16:34:39.456607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.875 [2024-12-16 16:34:39.456646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.875 [2024-12-16 16:34:39.456654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.875 [2024-12-16 16:34:39.456660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.875 [2024-12-16 16:34:39.456665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.875 [2024-12-16 16:34:39.458004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.875 [2024-12-16 16:34:39.458148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.875 [2024-12-16 16:34:39.458234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.875 [2024-12-16 16:34:39.458234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.134 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.135 [2024-12-16 16:34:39.597951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.135 16:34:39 
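nvmfappstart then launches nvmf_tgt inside that namespace, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock is up; the "TCP Transport Init" notice is the rpc_cmd nvmf_create_transport -t tcp -o -u 8192 call landing. By hand the same sequence looks roughly like this (a sketch: rpc_cmd is a thin wrapper around scripts/rpc.py, and the until-loop is a stand-in for the real waitforlisten helper, which also checks that the pid is still alive; paths are relative to the spdk checkout):

  # Start the target in the namespace, wait for its RPC socket, add the transport.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # poor man's waitforlisten
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192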
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:51.135 
16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.135 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.135 Malloc1 00:28:51.135 [2024-12-16 16:34:39.716553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.135 Malloc2 00:28:51.395 Malloc3 00:28:51.395 Malloc4 00:28:51.395 Malloc5 00:28:51.395 Malloc6 00:28:51.395 Malloc7 00:28:51.395 Malloc8 00:28:51.654 Malloc9 00:28:51.654 Malloc10 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1096479 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:51.654 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:51.654 [2024-12-16 16:34:40.210820] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
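Each "# cat" iteration above appends one subsystem's worth of RPC calls to rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 replays the whole file in one go over stdin; the Malloc1 through Malloc10 notices and the "Listening on 10.0.0.2 port 4420" message are those batched calls executing. The log never prints the file itself, so the stanza below is a plausible reconstruction rather than a quote (the bdev size, block size and serial numbers are assumptions):

  # Hypothetical shape of one rpcs.txt stanza, one per subsystem i in 1..10:
  cat >> rpcs.txt <<EOF
  bdev_malloc_create -b Malloc$i 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  EOF

The spdk_nvme_perf run that follows (-q 128 -o 45056 -w randwrite -t 20 against traddr:10.0.0.2 trsvcid:4420) is the workload that shutdown_tc4 will deliberately shoot the target out from under; the discovery-listener deprecation warning is expected noise here.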
00:28:56.934 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1096207 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096207 ']' 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096207 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1096207 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1096207' 00:28:56.935 killing process with pid 1096207 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1096207 00:28:56.935 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1096207 00:28:56.935 [2024-12-16 16:34:45.212900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1850 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.212955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1850 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.212963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1850 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.212970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1850 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.212976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b1850 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 [2024-12-16 16:34:45.214573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b2210 is same with the state(6) to be set 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 [2024-12-16 16:34:45.225275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write 
completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 [2024-12-16 16:34:45.226112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, 
sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.935 starting I/O failed: -6 00:28:56.935 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.226831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4c70 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.226859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4c70 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.226867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4c70 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.226874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4c70 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.226883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4c70 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 [2024-12-16 16:34:45.226889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4c70 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 
00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.227151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:56.936 [2024-12-16 16:34:45.227178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5140 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.227202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5140 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.227209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5140 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.227216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5140 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.227223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5140 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.227229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5140 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write 
completed with error (sct=0, sc=8) 00:28:56.936 [2024-12-16 16:34:45.227559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5610 is same with the state(6) to be set 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.227585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5610 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.227594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5610 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.227601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5610 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 [2024-12-16 16:34:45.227608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5610 is same with the state(6) to be set 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 [2024-12-16 16:34:45.227993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.228018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.228026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.228034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.228041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 [2024-12-16 16:34:45.228048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.228055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 [2024-12-16 16:34:45.228061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a47a0 is same with the state(6) to be set 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.936 starting I/O failed: -6 00:28:56.936 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.228913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:56.937 NVMe io qpair process completion error 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.229701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.229715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.229722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.229731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.229738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.229745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.229751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.229758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.229765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.229771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5fb0 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.229920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.230180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.230203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.230253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.230267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6480 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(6) to be set 00:28:56.937 starting I/O failed: -6 00:28:56.937 [2024-12-16 16:34:45.230603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(6) to be set 00:28:56.937 [2024-12-16 16:34:45.230623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 [2024-12-16 16:34:45.230629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(6) to be set 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error
(sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 Write completed with error (sct=0, sc=8) 00:28:56.937 starting I/O failed: -6 00:28:56.938 [2024-12-16 16:34:45.230800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 
starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 [2024-12-16 16:34:45.231771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 
starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 [2024-12-16 16:34:45.233273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:56.938 NVMe io qpair process completion error 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write 
completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 starting I/O failed: -6 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.938 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 [2024-12-16 16:34:45.234248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error 
(sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 [2024-12-16 16:34:45.235145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with 
error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.939 Write completed with error (sct=0, sc=8) 00:28:56.939 starting I/O failed: -6 00:28:56.940 [2024-12-16 16:34:45.236123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error 
(sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error 
(sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 [2024-12-16 16:34:45.238161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:56.940 NVMe io qpair process completion error 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error 
(sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 [2024-12-16 16:34:45.239147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 starting I/O failed: -6 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.940 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error 
(sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 [2024-12-16 16:34:45.240050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, 
sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 [2024-12-16 16:34:45.241078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 
00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.941 starting I/O failed: -6 00:28:56.941 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 [2024-12-16 16:34:45.242972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:56.942 NVMe io qpair process completion error 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 
00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 [2024-12-16 16:34:45.243988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 
00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 [2024-12-16 16:34:45.244869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 
starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 [2024-12-16 16:34:45.245879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.942 starting I/O failed: -6 00:28:56.942 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write 
completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write 
completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 [2024-12-16 16:34:45.250874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:56.943 NVMe io qpair process completion error 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 starting I/O failed: -6 00:28:56.943 Write completed with error (sct=0, sc=8) 00:28:56.943 Write 
completed with error (sct=0, sc=8)
00:28:56.943 [... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries, one per queued write, elided between the driver errors below ...]
00:28:56.943 [2024-12-16 16:34:45.251919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:56.943 [2024-12-16 16:34:45.252793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:56.944 [2024-12-16 16:34:45.253827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:56.945 [2024-12-16 16:34:45.255845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:56.945 NVMe io qpair process completion error
00:28:56.945 [2024-12-16 16:34:45.256814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:56.945 [2024-12-16 16:34:45.257705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:56.945 [2024-12-16 16:34:45.258726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:56.946 [2024-12-16 16:34:45.260747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:56.946 NVMe io qpair process completion error
00:28:56.946 [2024-12-16 16:34:45.261755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:56.947 [2024-12-16 16:34:45.262621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:56.947 [2024-12-16 16:34:45.263646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:56.947 [2024-12-16 16:34:45.267913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:56.947 NVMe io qpair process completion error
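The repeated entries above and below come from SPDK's NVMe host driver and the test application's I/O callbacks during a forced target disconnect: -6 is -ENXIO ("No such device or address"), returned when spdk_nvme_qpair_process_completions() finds the TCP connection to the subsystem gone, and writes still in flight are then completed with NVMe generic status sct=0, sc=8, which the NVMe spec defines as "Command Aborted due to SQ Deletion". A minimal sketch of how an SPDK caller would observe both messages, assuming an already-connected qpair and hypothetical on_write_done()/poll_qpair() helpers (this is not the test tool's actual source):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* I/O completion callback: fires once per submitted write
     * (registered via e.g. spdk_nvme_ns_cmd_write()). */
    static void
    on_write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* sct=0, sc=8 in the log is Generic Command Status /
                     * Command Aborted due to SQ Deletion: the qpair was
                     * torn down with this write still outstanding. */
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }

    /* Poll loop: a negative return means the transport itself failed. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
            if (rc < 0) {
                    /* -6 == -ENXIO, matching "CQ transport error -6" above:
                     * the connection is gone, so new submissions also fail,
                     * which is presumably the "starting I/O failed: -6" lines. */
                    fprintf(stderr, "qpair poll failed: %d\n", rc);
            }
    }

On that reading, each cnodeN burst is one disconnect event per subsystem: queued writes drain with sc=8, the poller then reports -6 on each of the subsystem's qpairs (ids 1-4), and the application abandons the controller with "NVMe io qpair process completion error".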
00:28:56.948 [... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided between the driver errors below ...]
00:28:56.948 [2024-12-16 16:34:45.268897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:56.948 [2024-12-16 16:34:45.269784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:56.948 [2024-12-16 16:34:45.270799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:56.949 [2024-12-16 16:34:45.274633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:56.949 NVMe io qpair process completion error
00:28:56.949 [2024-12-16 16:34:45.275594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:56.949 [2024-12-16 16:34:45.276413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:56.950 [... further repeated write-failure entries for cnode7 elided ...] 00:28:56.950 starting I/O
failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 [2024-12-16 16:34:45.277979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, sc=8) 00:28:56.950 starting I/O failed: -6 00:28:56.950 Write completed with error (sct=0, 
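In these completions, sct is the NVMe Status Code Type and sc the Status Code: (sct=0, sc=8) should decode, per the generic status table of the NVMe base specification, to "Command Aborted due to SQ Deletion", which matches in-flight writes being flushed while the shutdown test tears the qpairs down, and -6 is errno ENXIO (No such device or address). A minimal, illustrative decoder for the handful of codes seen in this log (not SPDK's own error-string helper):

  # Hedged sketch: map the (sct, sc) pair printed above to a readable name.
  # Only the generic (sct=0) codes relevant to this log are covered.
  decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct" in
      0) # Generic Command Status
        case "$sc" in
          0) echo "Successful Completion" ;;
          4) echo "Data Transfer Error" ;;
          8) echo "Command Aborted due to SQ Deletion" ;;
          *) echo "generic status code $sc" ;;
        esac ;;
      1) echo "Command Specific Status (sc=$sc)" ;;
      2) echo "Media and Data Integrity Error (sc=$sc)" ;;
      *) echo "status code type $sct, status code $sc" ;;
    esac
  }

  decode_nvme_status 0 8   # prints: Command Aborted due to SQ Deletion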
00:28:56.950 [2024-12-16 16:34:45.279827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:56.950 NVMe io qpair process completion error
00:28:56.950 Initializing NVMe Controllers
00:28:56.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:56.950 Controller IO queue size 128, less than required.
00:28:56.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:56.950 Controller IO queue size 128, less than required.
00:28:56.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:56.950 Controller IO queue size 128, less than required.
00:28:56.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:56.950 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:56.951 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:56.951 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:56.951 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:56.951 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:56.951 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:56.951 Controller IO queue size 128, less than required.
00:28:56.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:56.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:56.951 Initialization complete. Launching workers.
00:28:56.951 ========================================================
00:28:56.951                                                                            Latency(us)
00:28:56.951 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:    2227.32      95.71   57473.19     644.45  100406.23
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:    2195.79      94.35   58308.27     705.07  117008.82
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:    2227.11      95.70   57505.34     705.64   98364.50
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:    2242.13      96.34   57136.62     691.81  100964.29
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:    2203.19      94.67   58198.66     819.40   95691.58
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:    2202.35      94.63   58239.91     683.01  109090.44
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:    2202.13      94.62   58259.37     709.51  111451.83
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:    2200.44      94.55   58348.44     907.84  116424.94
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:    2253.14      96.81   57023.22     772.60   98863.27
00:28:56.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    2218.85      95.34   57244.26     727.20   95326.46
00:28:56.951 ========================================================
00:28:56.951 Total                                                                    :   22172.46     952.72   57769.64     644.45  117008.82
00:28:56.951
00:28:56.951 [2024-12-16 16:34:45.282749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7b30 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2190 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2370 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2550 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b4320 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b3cc0 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b4650 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.282987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b2880 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.283016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b3ff0 is same with the state(6) to be set
00:28:56.951 [2024-12-16 16:34:45.283044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b1fb0 is same with the state(6) to be set
00:28:56.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:57.210 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
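The "Controller IO queue size 128, less than required" notices above mean the tool asked for a deeper queue than the 128 entries each controller advertises, so surplus requests sit queued inside the NVMe driver. As a hedged illustration (the -q/-o/-w/-t values are assumptions, not the test's actual settings), the workload could be re-run with a depth the controllers can absorb:

  # Illustrative spdk_nvme_perf invocation against the same 10.0.0.2:4420
  # target; -q is the per-namespace queue depth (kept at or below the
  # advertised 128), -o the IO size in bytes, -w the pattern, -t the run
  # time in seconds.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -q 64 -o 4096 -w write -t 10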
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1096479
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1096479
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1096479
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:58.149 rmmod nvme_tcp
00:28:58.149 rmmod nvme_fabrics
00:28:58.149 rmmod nvme_keyring
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
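The NOT wrapper traced above inverts a command's exit status, so the test passes only when the wrapped command fails; here it requires that wait on the killed perf process (pid 1096479) reports a non-zero exit. A minimal sketch of the pattern, assuming a simplified helper (SPDK's real NOT also special-cases statuses above 128, which come from signals):

  # Hedged re-implementation of the NOT-style assertion: run the command,
  # capture its status, and succeed only if that status is non-zero.
  not() {
    local es=0
    "$@" || es=$?     # run the wrapped command, remembering its exit status
    (( es != 0 ))     # invert: the command failing makes this helper succeed
  }

  # usage sketch (the pid is the one from this log):
  not wait 1096479 && echo "perf exited non-zero, as the shutdown test expects"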
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1096207 ']'
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1096207
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096207 ']'
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096207
00:28:58.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1096207) - No such process
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1096207 is not found'
00:28:58.149 Process with pid 1096207 is not found
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:58.149 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:00.686
00:29:00.686 real 0m9.769s
00:29:00.686 user 0m24.793s
00:29:00.686 sys 0m5.273s
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:00.686 ************************************
00:29:00.686 END TEST nvmf_shutdown_tc4 ************************************
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:00.686
00:29:00.686 real 0m39.772s
00:29:00.686 user 1m36.234s
00:29:00.686 sys 0m13.895s
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
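The iptr step above cleans the firewall by dumping the live ruleset, filtering out every rule tagged with the SPDK_NVMF comment (the tag the test attaches when it opens port 4420), and loading the result back. The pattern, as traced:

  # Save/filter/restore: drop only the rules carrying the SPDK_NVMF
  # comment tag, leaving all other iptables state intact. Needs root,
  # like the original helper.
  iptables-save | grep -v SPDK_NVMF | iptables-restore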
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:00.686 ************************************
00:29:00.686 END TEST nvmf_shutdown ************************************
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:00.686 ************************************
00:29:00.686 START TEST nvmf_nsid ************************************
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:00.686 * Looking for test storage...
00:29:00.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:29:00.686 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:00.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:00.686 --rc genhtml_branch_coverage=1
00:29:00.686 --rc genhtml_function_coverage=1
00:29:00.686 --rc genhtml_legend=1
00:29:00.686 --rc geninfo_all_blocks=1
00:29:00.686 --rc geninfo_unexecuted_blocks=1
00:29:00.686
00:29:00.686 '
[the matching LCOV_OPTS= assignment and the export 'LCOV=lcov ...' / LCOV='lcov ...' entries repeat the same option block; duplicates omitted]
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
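The cmp_versions walk above splits each version string on '.', '-' or ':' and compares the numeric fields left to right, so lt 1.15 2 succeeds at the very first field (1 < 2). A self-contained sketch of the same idea (simplified; scripts/common.sh's real helper also validates each field through its decimal routine):

  # Hedged sketch of dotted-version comparison: true when $1 < $2.
  # Numeric fields are assumed.
  version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
  }

  version_lt 1.15 2 && echo "1.15 < 2"   # matches the traced lt 1.15 2 result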
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:00.686 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2 through @6 prepend the same Go, protoc and golangci toolchain directories to PATH several times over, export it, and echo the result; the very long duplicated PATH values are omitted here]
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:00.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
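The "line 33: [: : integer expression expected" message above is the classic empty-variable pitfall with test's numeric operators: '[' '' -eq 1 ']' has no integer on the left to compare, so the script logs an error (it is harmless here, since the branch falls through). A defensive sketch of the same check:

  # Guard numeric comparisons against empty/unset variables.
  x=""
  if [ "${x:-0}" -eq 1 ]; then   # default an empty value to 0 before -eq
    echo "x is 1"
  fi
  # or validate first:
  [[ $x =~ ^[0-9]+$ ]] && [ "$x" -eq 1 ] && echo "x is 1"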
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:29:00.687 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=()
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:29:07.259 Found 0000:af:00.0 (0x8086 - 0x159b)
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:29:07.259 Found 0000:af:00.1 (0x8086 - 0x159b)
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
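The discovery traced here, and continued below, first matches supported Intel E810/X722 and Mellanox device IDs against the PCI bus (this host found two E810 ports, 0000:af:00.0 and 0000:af:00.1), then maps each PCI address to the kernel net devices that sysfs exposes under it. A hedged sketch of that second step, using the same sysfs layout the trace reads:

  # For each discovered PCI address, list the net devices sysfs exposes
  # under /sys/bus/pci/devices/<addr>/net/ (addresses are this host's).
  for pci in 0000:af:00.0 0000:af:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
    done
  done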
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:29:07.259 Found net devices under 0000:af:00.0: cvl_0_0
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:29:07.259 Found net devices under 0000:af:00.1: cvl_0_1
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:07.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:07.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms
00:29:07.259
00:29:07.259 --- 10.0.0.2 ping statistics ---
00:29:07.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:07.259 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms
00:29:07.259 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:07.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:07.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:29:07.260
00:29:07.260 --- 10.0.0.1 ping statistics ---
00:29:07.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:07.260 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1100847
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1100847
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1100847 ']'
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:07.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:07.260 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
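At this point the test has split one host into two endpoints: the cvl_0_0 port lives inside the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the pings above prove the two sides can reach each other before the target app is launched inside the namespace. A condensed, hedged sketch of that bring-up (the commands mirror the trace; the socket wait loop is illustrative, while the real waitforlisten helper retries with a bounded number of attempts):

  # Namespace pair plus target launch, distilled from the traced setup.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 1 &                    # -i shm id, -e trace mask, -m core mask
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten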
00:29:07.260 [2024-12-16 16:34:54.996465] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.260 [2024-12-16 16:34:55.074478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.260 [2024-12-16 16:34:55.096873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.260 [2024-12-16 16:34:55.096907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.260 [2024-12-16 16:34:55.096915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.260 [2024-12-16 16:34:55.096921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.260 [2024-12-16 16:34:55.096927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.260 [2024-12-16 16:34:55.097432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1100869 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=099821f2-1c51-4580-8447-a18730833743 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=7dabc516-ea17-417b-8bee-10282bc5c5fb 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4b6cfadc-a623-4c9b-aae3-08a44fb8354d 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:07.260 null0 00:29:07.260 null1 00:29:07.260 [2024-12-16 16:34:55.278164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:07.260 [2024-12-16 16:34:55.278206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100869 ] 00:29:07.260 null2 00:29:07.260 [2024-12-16 16:34:55.284717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.260 [2024-12-16 16:34:55.308916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1100869 /var/tmp/tgt2.sock 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1100869 ']' 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:07.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
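The nsid test traced here provisions three namespaces with known UUIDs (null0, null1, null2 on the second target) and then verifies, from the initiator side, that each block device reports an NGUID equal to the hyphen-stripped, upper-cased UUID. A condensed sketch of that check, using the first UUID from the trace above; the one-liner below is a paraphrase of the script's uuid2nguid and nvme_get_nguid helpers, not their full definitions:

    # Expected NGUID: the namespace UUID with hyphens removed, upper-cased.
    uuid=099821f2-1c51-4580-8447-a18730833743
    expected=$(tr -d - <<< "$uuid"); expected=${expected^^}

    # Actual NGUID as reported by the connected controller's first namespace.
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid); actual=${actual^^}

    [[ $actual == "$expected" ]] && echo "nsid 1 NGUID matches"

The trace below performs exactly this comparison for nvme0n1, nvme0n2, and nvme0n3 after connecting to nqn.2024-10.io.spdk:cnode2 on port 4421.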
00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:07.260 [2024-12-16 16:34:55.349545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.260 [2024-12-16 16:34:55.372019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:07.260 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:07.519 [2024-12-16 16:34:55.902140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.519 [2024-12-16 16:34:55.918226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:07.519 nvme0n1 nvme0n2 00:29:07.519 nvme1n1 00:29:07.519 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:07.519 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:07.519 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.455 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:08.455 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:08.455 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:08.455 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:08.455 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:08.455 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:08.456 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:09.831 16:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 099821f2-1c51-4580-8447-a18730833743 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=099821f21c5145808447a18730833743 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 099821F21C5145808447A18730833743 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 099821F21C5145808447A18730833743 == \0\9\9\8\2\1\F\2\1\C\5\1\4\5\8\0\8\4\4\7\A\1\8\7\3\0\8\3\3\7\4\3 ]] 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 7dabc516-ea17-417b-8bee-10282bc5c5fb 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7dabc516ea17417b8bee10282bc5c5fb 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7DABC516EA17417B8BEE10282BC5C5FB 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 7DABC516EA17417B8BEE10282BC5C5FB == \7\D\A\B\C\5\1\6\E\A\1\7\4\1\7\B\8\B\E\E\1\0\2\8\2\B\C\5\C\5\F\B ]] 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:09.831 16:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4b6cfadc-a623-4c9b-aae3-08a44fb8354d 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4b6cfadca6234c9baae308a44fb8354d 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4B6CFADCA6234C9BAAE308A44FB8354D 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4B6CFADCA6234C9BAAE308A44FB8354D == \4\B\6\C\F\A\D\C\A\6\2\3\4\C\9\B\A\A\E\3\0\8\A\4\4\F\B\8\3\5\4\D ]] 00:29:09.831 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1100869 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1100869 ']' 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1100869 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100869 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100869' 00:29:10.090 killing process with pid 1100869 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1100869 00:29:10.090 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1100869 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.348 rmmod nvme_tcp 00:29:10.348 rmmod nvme_fabrics 00:29:10.348 rmmod nvme_keyring 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.348 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1100847 ']' 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1100847 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1100847 ']' 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1100847 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100847 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100847' 00:29:10.349 killing process with pid 1100847 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1100847 00:29:10.349 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1100847 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.608 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.147 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.147 00:29:13.147 real 0m12.288s 00:29:13.147 user 0m9.614s 
00:29:13.147 sys 0m5.445s 00:29:13.147 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:13.147 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.147 ************************************ 00:29:13.147 END TEST nvmf_nsid 00:29:13.147 ************************************ 00:29:13.147 16:35:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:13.147 00:29:13.147 real 18m31.918s 00:29:13.147 user 49m4.008s 00:29:13.147 sys 4m37.276s 00:29:13.147 16:35:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:13.147 16:35:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:13.147 ************************************ 00:29:13.147 END TEST nvmf_target_extra 00:29:13.147 ************************************ 00:29:13.147 16:35:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:13.147 16:35:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:13.147 16:35:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.147 16:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:13.147 ************************************ 00:29:13.147 START TEST nvmf_host 00:29:13.147 ************************************ 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:13.147 * Looking for test storage... 00:29:13.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.147 --rc genhtml_branch_coverage=1 00:29:13.147 --rc genhtml_function_coverage=1 00:29:13.147 --rc genhtml_legend=1 00:29:13.147 --rc geninfo_all_blocks=1 00:29:13.147 --rc geninfo_unexecuted_blocks=1 00:29:13.147 00:29:13.147 ' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.147 --rc genhtml_branch_coverage=1 00:29:13.147 --rc genhtml_function_coverage=1 00:29:13.147 --rc genhtml_legend=1 00:29:13.147 --rc geninfo_all_blocks=1 00:29:13.147 --rc geninfo_unexecuted_blocks=1 00:29:13.147 00:29:13.147 ' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.147 --rc genhtml_branch_coverage=1 00:29:13.147 --rc genhtml_function_coverage=1 00:29:13.147 --rc genhtml_legend=1 00:29:13.147 --rc geninfo_all_blocks=1 00:29:13.147 --rc geninfo_unexecuted_blocks=1 00:29:13.147 00:29:13.147 ' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.147 --rc genhtml_branch_coverage=1 00:29:13.147 --rc genhtml_function_coverage=1 00:29:13.147 --rc genhtml_legend=1 00:29:13.147 --rc geninfo_all_blocks=1 00:29:13.147 --rc geninfo_unexecuted_blocks=1 00:29:13.147 00:29:13.147 ' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.147 16:35:01 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:13.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.148 ************************************ 00:29:13.148 START TEST nvmf_multicontroller 00:29:13.148 ************************************ 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:13.148 * Looking for test storage... 
00:29:13.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.148 --rc genhtml_branch_coverage=1 00:29:13.148 --rc genhtml_function_coverage=1 00:29:13.148 --rc genhtml_legend=1 00:29:13.148 --rc geninfo_all_blocks=1 00:29:13.148 --rc geninfo_unexecuted_blocks=1 00:29:13.148 00:29:13.148 ' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.148 --rc genhtml_branch_coverage=1 00:29:13.148 --rc genhtml_function_coverage=1 00:29:13.148 --rc genhtml_legend=1 00:29:13.148 --rc geninfo_all_blocks=1 00:29:13.148 --rc geninfo_unexecuted_blocks=1 00:29:13.148 00:29:13.148 ' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.148 --rc genhtml_branch_coverage=1 00:29:13.148 --rc genhtml_function_coverage=1 00:29:13.148 --rc genhtml_legend=1 00:29:13.148 --rc geninfo_all_blocks=1 00:29:13.148 --rc geninfo_unexecuted_blocks=1 00:29:13.148 00:29:13.148 ' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.148 --rc genhtml_branch_coverage=1 00:29:13.148 --rc genhtml_function_coverage=1 00:29:13.148 --rc genhtml_legend=1 00:29:13.148 --rc geninfo_all_blocks=1 00:29:13.148 --rc geninfo_unexecuted_blocks=1 00:29:13.148 00:29:13.148 ' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:13.148 16:35:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.148 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:13.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:13.149 16:35:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.149 16:35:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.838 
16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:19.838 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:19.838 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.838 16:35:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:19.838 Found net devices under 0000:af:00.0: cvl_0_0 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:19.838 Found net devices under 0000:af:00.1: cvl_0_1 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
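Each suite in this log repeats the same fixture: gather_supported_nvmf_pci_devs matches the two E810 ports (0x8086:0x159b) to cvl_0_0 and cvl_0_1, then nvmf_tcp_init (traced next) moves the target port into a private network namespace and leaves the initiator port in the default namespace. A condensed sketch of that topology, with interface names and addresses taken verbatim from the trace; this is a recap of the traced commands, not the full helper:

    # Target side: isolate cvl_0_0 in its own namespace at 10.0.0.2/24.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator side: cvl_0_1 stays in the default namespace at 10.0.0.1/24,
    # with an iptables ACCEPT rule for the NVMe/TCP listener on port 4420.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow confirm reachability in both directions before nvmf_tgt is launched inside the namespace via "${NVMF_TARGET_NS_CMD[@]}".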
00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.838 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:29:19.839 00:29:19.839 --- 10.0.0.2 ping statistics --- 00:29:19.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.839 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:29:19.839 00:29:19.839 --- 10.0.0.1 ping statistics --- 00:29:19.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.839 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1105105 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1105105 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105105 ']' 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 [2024-12-16 16:35:07.656149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:19.839 [2024-12-16 16:35:07.656193] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.839 [2024-12-16 16:35:07.733529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:19.839 [2024-12-16 16:35:07.755186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.839 [2024-12-16 16:35:07.755226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.839 [2024-12-16 16:35:07.755234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.839 [2024-12-16 16:35:07.755240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.839 [2024-12-16 16:35:07.755246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.839 [2024-12-16 16:35:07.756485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.839 [2024-12-16 16:35:07.756591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.839 [2024-12-16 16:35:07.756593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 [2024-12-16 16:35:07.895832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 Malloc0 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 [2024-12-16 16:35:07.963550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 [2024-12-16 16:35:07.975482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 Malloc1 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1105135 00:29:19.839 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1105135 /var/tmp/bdevperf.sock 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105135 ']' 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:19.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
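The target (nvmf_tgt, pid 1105105) is now serving cnode1 and cnode2 on 10.0.0.2 ports 4420 and 4421 inside the namespace, and bdevperf has been launched with -z, so it comes up idle and waits for configuration on its private RPC socket /var/tmp/bdevperf.sock. Everything after this point drives that idle bdevperf over the socket. A sketch of the same flow outside the harness, assuming an SPDK checkout at $SPDK (rpc_cmd in this log is a thin wrapper over scripts/rpc.py):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w write -t 1 -f &             # -z: start idle, wait for RPC config
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done
  "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1     # -i: source address on the initiator side
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests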
00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.840 NVMe0n1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.840 1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.840 request: 00:29:19.840 { 00:29:19.840 "name": "NVMe0", 00:29:19.840 "trtype": "tcp", 00:29:19.840 "traddr": "10.0.0.2", 00:29:19.840 "adrfam": "ipv4", 00:29:19.840 "trsvcid": "4420", 00:29:19.840 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:19.840 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:19.840 "hostaddr": "10.0.0.1", 00:29:19.840 "prchk_reftag": false, 00:29:19.840 "prchk_guard": false, 00:29:19.840 "hdgst": false, 00:29:19.840 "ddgst": false, 00:29:19.840 "allow_unrecognized_csi": false, 00:29:19.840 "method": "bdev_nvme_attach_controller", 00:29:19.840 "req_id": 1 00:29:19.840 } 00:29:19.840 Got JSON-RPC error response 00:29:19.840 response: 00:29:19.840 { 00:29:19.840 "code": -114, 00:29:19.840 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:19.840 } 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.840 request: 00:29:19.840 { 00:29:19.840 "name": "NVMe0", 00:29:19.840 "trtype": "tcp", 00:29:19.840 "traddr": "10.0.0.2", 00:29:19.840 "adrfam": "ipv4", 00:29:19.840 "trsvcid": "4420", 00:29:19.840 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:19.840 "hostaddr": "10.0.0.1", 00:29:19.840 "prchk_reftag": false, 00:29:19.840 "prchk_guard": false, 00:29:19.840 "hdgst": false, 00:29:19.840 "ddgst": false, 00:29:19.840 "allow_unrecognized_csi": false, 00:29:19.840 "method": "bdev_nvme_attach_controller", 00:29:19.840 "req_id": 1 00:29:19.840 } 00:29:19.840 Got JSON-RPC error response 00:29:19.840 response: 00:29:19.840 { 00:29:19.840 "code": -114, 00:29:19.840 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:19.840 } 00:29:19.840 16:35:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.840 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.840 request: 00:29:19.840 { 00:29:19.840 "name": "NVMe0", 00:29:19.840 "trtype": "tcp", 00:29:19.840 "traddr": "10.0.0.2", 00:29:19.840 "adrfam": "ipv4", 00:29:19.840 "trsvcid": "4420", 00:29:19.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.840 "hostaddr": "10.0.0.1", 00:29:19.840 "prchk_reftag": false, 00:29:19.840 "prchk_guard": false, 00:29:19.840 "hdgst": false, 00:29:19.840 "ddgst": false, 00:29:19.840 "multipath": "disable", 00:29:19.840 "allow_unrecognized_csi": false, 00:29:19.840 "method": "bdev_nvme_attach_controller", 00:29:20.099 "req_id": 1 00:29:20.099 } 00:29:20.099 Got JSON-RPC error response 00:29:20.099 response: 00:29:20.099 { 00:29:20.099 "code": -114, 00:29:20.099 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:20.099 } 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:20.099 16:35:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.099 request: 00:29:20.099 { 00:29:20.099 "name": "NVMe0", 00:29:20.099 "trtype": "tcp", 00:29:20.099 "traddr": "10.0.0.2", 00:29:20.099 "adrfam": "ipv4", 00:29:20.099 "trsvcid": "4420", 00:29:20.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.099 "hostaddr": "10.0.0.1", 00:29:20.099 "prchk_reftag": false, 00:29:20.099 "prchk_guard": false, 00:29:20.099 "hdgst": false, 00:29:20.099 "ddgst": false, 00:29:20.099 "multipath": "failover", 00:29:20.099 "allow_unrecognized_csi": false, 00:29:20.099 "method": "bdev_nvme_attach_controller", 00:29:20.099 "req_id": 1 00:29:20.099 } 00:29:20.099 Got JSON-RPC error response 00:29:20.099 response: 00:29:20.099 { 00:29:20.099 "code": -114, 00:29:20.099 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:20.099 } 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.099 NVMe0n1 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
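The four rejected attaches above, followed by the accepted one on port 4421, pin down the reuse rules for an existing controller name: under the same -b name, bdev_nvme_attach_controller is accepted only when it adds a genuinely new network path to the same subsystem with the same host identity; a different hostnqn, a different subsystem NQN, a repeat of an existing path (even with -x failover), or any second attach while multipath is disabled all come back as JSON-RPC error -114. A condensed sketch of that rule, reusing this run's socket and addresses:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
  # Accepted: same subsystem, new port -> second path under the same controller name
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Rejected (-114): same name pointed at a different subsystem NQN
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo "expected: name in use"
  # Rejected (-114): multipath explicitly disabled forbids any second attach
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || echo "expected: multipath disabled"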
00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.099 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.358 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:20.358 16:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:21.733 { 00:29:21.733 "results": [ 00:29:21.733 { 00:29:21.733 "job": "NVMe0n1", 00:29:21.733 "core_mask": "0x1", 00:29:21.733 "workload": "write", 00:29:21.733 "status": "finished", 00:29:21.733 "queue_depth": 128, 00:29:21.733 "io_size": 4096, 00:29:21.733 "runtime": 1.007766, 00:29:21.733 "iops": 25461.267794309395, 00:29:21.733 "mibps": 99.45807732152107, 00:29:21.733 "io_failed": 0, 00:29:21.733 "io_timeout": 0, 00:29:21.733 "avg_latency_us": 5021.169259686103, 00:29:21.733 "min_latency_us": 1451.1542857142856, 00:29:21.733 "max_latency_us": 9924.022857142858 00:29:21.733 } 00:29:21.733 ], 00:29:21.733 "core_count": 1 00:29:21.733 } 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1105135 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1105135 ']' 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105135 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:21.733 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.734 16:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105135 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105135' 00:29:21.734 killing process with pid 1105135 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105135 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105135 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:21.734 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:21.734 [2024-12-16 16:35:08.076989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:21.734 [2024-12-16 16:35:08.077033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105135 ] 00:29:21.734 [2024-12-16 16:35:08.149816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.734 [2024-12-16 16:35:08.172772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.734 [2024-12-16 16:35:08.822277] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 0c923dca-c9a2-45e0-b196-694df73681f2 already exists 00:29:21.734 [2024-12-16 16:35:08.822304] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:0c923dca-c9a2-45e0-b196-694df73681f2 alias for bdev NVMe1n1 00:29:21.734 [2024-12-16 16:35:08.822312] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:21.734 Running I/O for 1 seconds... 00:29:21.734 25404.00 IOPS, 99.23 MiB/s 00:29:21.734 Latency(us) 00:29:21.734 [2024-12-16T15:35:10.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.734 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:21.734 NVMe0n1 : 1.01 25461.27 99.46 0.00 0.00 5021.17 1451.15 9924.02 00:29:21.734 [2024-12-16T15:35:10.343Z] =================================================================================================================== 00:29:21.734 [2024-12-16T15:35:10.343Z] Total : 25461.27 99.46 0.00 0.00 5021.17 1451.15 9924.02 00:29:21.734 Received shutdown signal, test time was about 1.000000 seconds 00:29:21.734 00:29:21.734 Latency(us) 00:29:21.734 [2024-12-16T15:35:10.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.734 [2024-12-16T15:35:10.343Z] =================================================================================================================== 00:29:21.734 [2024-12-16T15:35:10.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.734 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.734 rmmod nvme_tcp 00:29:21.734 rmmod nvme_fabrics 00:29:21.734 rmmod nvme_keyring 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:21.734 
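Two details of the try.txt dump above are worth decoding. The ERROR lines are a survivable consequence of the multicontroller setup, not a test failure: NVMe1 was attached as a second controller to the same subsystem, so its namespace carries the same UUID as NVMe0n1, the uuid alias for NVMe1n1 collides in the bdev name table, and spdk_bdev_register() refuses it while bdevperf's one-second write run on NVMe0n1 still completes at roughly 25.5k IOPS. After the dump, teardown unloads the kernel transport in dependency order, as logged; a sketch assuming the modules were loaded by this run:

  # Kernel-side teardown mirroring the logged sequence.
  sync                          # flush dirty pages before removing the transport
  modprobe -v -r nvme-tcp       # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics   # no-op if the cascade above already removed it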
16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1105105 ']' 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1105105 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1105105 ']' 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105105 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.734 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105105 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105105' 00:29:21.993 killing process with pid 1105105 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105105 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105105 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.993 16:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.526 00:29:24.526 real 0m11.110s 00:29:24.526 user 0m12.281s 00:29:24.526 sys 0m5.199s 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.526 ************************************ 00:29:24.526 END TEST nvmf_multicontroller 00:29:24.526 ************************************ 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.526 ************************************ 00:29:24.526 START TEST nvmf_aer 00:29:24.526 ************************************ 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:24.526 * Looking for test storage... 00:29:24.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:24.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.526 --rc genhtml_branch_coverage=1 00:29:24.526 --rc genhtml_function_coverage=1 00:29:24.526 --rc genhtml_legend=1 00:29:24.526 --rc geninfo_all_blocks=1 00:29:24.526 --rc geninfo_unexecuted_blocks=1 00:29:24.526 00:29:24.526 ' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:24.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.526 --rc genhtml_branch_coverage=1 00:29:24.526 --rc genhtml_function_coverage=1 00:29:24.526 --rc genhtml_legend=1 00:29:24.526 --rc geninfo_all_blocks=1 00:29:24.526 --rc geninfo_unexecuted_blocks=1 00:29:24.526 00:29:24.526 ' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:24.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.526 --rc genhtml_branch_coverage=1 00:29:24.526 --rc genhtml_function_coverage=1 00:29:24.526 --rc genhtml_legend=1 00:29:24.526 --rc geninfo_all_blocks=1 00:29:24.526 --rc geninfo_unexecuted_blocks=1 00:29:24.526 00:29:24.526 ' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:24.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.526 --rc genhtml_branch_coverage=1 00:29:24.526 --rc genhtml_function_coverage=1 00:29:24.526 --rc genhtml_legend=1 00:29:24.526 --rc geninfo_all_blocks=1 00:29:24.526 --rc geninfo_unexecuted_blocks=1 00:29:24.526 00:29:24.526 ' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.526 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.527 16:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.094 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.094 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.094 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.094 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:31.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:31.095 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:31.095 Found net devices under 0000:af:00.0: cvl_0_0 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.095 16:35:18 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:31.095 Found net devices under 0000:af:00.1: cvl_0_1 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.095 
16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:29:31.095 00:29:31.095 --- 10.0.0.2 ping statistics --- 00:29:31.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.095 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:31.095 00:29:31.095 --- 10.0.0.1 ping statistics --- 00:29:31.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.095 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.095 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1109053 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1109053 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1109053 ']' 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.096 16:35:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 [2024-12-16 16:35:18.885178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
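The target-namespace wiring traced above (ip netns add cvl_0_0_ns_spdk, moving cvl_0_0 into it, addressing 10.0.0.1/10.0.0.2, the SPDK_NVMF iptables ACCEPT rule, and the two ping checks) is reproducible outside the harness. A minimal sketch, assuming root privileges and a veth pair as a stand-in for the physical E810 ports cvl_0_0/cvl_0_1 present on this rig:

```bash
#!/usr/bin/env bash
# Sketch of the target-namespace setup performed by nvmf_tcp_init (run as root).
# ASSUMPTION: veth0/veth1 stand in for the physical ports cvl_0_0/cvl_0_1.
set -euo pipefail

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link add veth1 type veth peer name veth0   # veth0 plays the target-side port
ip link set veth0 netns "$NS"

# Initiator keeps 10.0.0.1 in the host namespace; target gets 10.0.0.2 inside it.
ip addr add 10.0.0.1/24 dev veth1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth0
ip link set veth1 up
ip netns exec "$NS" ip link set veth0 up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on port 4420, mirroring the harness's ipts helper.
iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT'

# Connectivity check in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```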
00:29:31.096 [2024-12-16 16:35:18.885222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.096 [2024-12-16 16:35:18.963490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.096 [2024-12-16 16:35:18.987224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.096 [2024-12-16 16:35:18.987261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.096 [2024-12-16 16:35:18.987268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.096 [2024-12-16 16:35:18.987274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.096 [2024-12-16 16:35:18.987279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.096 [2024-12-16 16:35:18.988728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.096 [2024-12-16 16:35:18.988837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.096 [2024-12-16 16:35:18.988945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.096 [2024-12-16 16:35:18.988947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 [2024-12-16 16:35:19.121704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 Malloc0 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 [2024-12-16 16:35:19.185870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.096 [ 00:29:31.096 { 00:29:31.096 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:31.096 "subtype": "Discovery", 00:29:31.096 "listen_addresses": [], 00:29:31.096 "allow_any_host": true, 00:29:31.096 "hosts": [] 00:29:31.096 }, 00:29:31.096 { 00:29:31.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:31.096 "subtype": "NVMe", 00:29:31.096 "listen_addresses": [ 00:29:31.096 { 00:29:31.096 "trtype": "TCP", 00:29:31.096 "adrfam": "IPv4", 00:29:31.096 "traddr": "10.0.0.2", 00:29:31.096 "trsvcid": "4420" 00:29:31.096 } 00:29:31.096 ], 00:29:31.096 "allow_any_host": true, 00:29:31.096 "hosts": [], 00:29:31.096 "serial_number": "SPDK00000000000001", 00:29:31.096 "model_number": "SPDK bdev Controller", 00:29:31.096 "max_namespaces": 2, 00:29:31.096 "min_cntlid": 1, 00:29:31.096 "max_cntlid": 65519, 00:29:31.096 "namespaces": [ 00:29:31.096 { 00:29:31.096 "nsid": 1, 00:29:31.096 "bdev_name": "Malloc0", 00:29:31.096 "name": "Malloc0", 00:29:31.096 "nguid": "C7D622C58C18437392FAB07F46D9B01D", 00:29:31.096 "uuid": "c7d622c5-8c18-4373-92fa-b07f46d9b01d" 00:29:31.096 } 00:29:31.096 ] 00:29:31.096 } 00:29:31.096 ] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1109083 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:31.096 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.097 Malloc1 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.097 Asynchronous Event Request test 00:29:31.097 Attaching to 10.0.0.2 00:29:31.097 Attached to 10.0.0.2 00:29:31.097 Registering asynchronous event callbacks... 00:29:31.097 Starting namespace attribute notice tests for all controllers... 00:29:31.097 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:31.097 aer_cb - Changed Namespace 00:29:31.097 Cleaning up... 
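The rpc_cmd calls traced above can be replayed by hand against a running nvmf_tgt; the nvmf_get_subsystems dump just below shows the resulting subsystem with both namespaces attached. A sketch using scripts/rpc.py with the same arguments the test issues, assuming the default RPC socket and a target that is already up:

```bash
#!/usr/bin/env bash
# Replays the target-side RPCs from host/aer.sh and triggers one
# namespace-change AER by attaching a second namespace.
# ASSUMPTION: nvmf_tgt is already running and rpc.py talks to its default socket.
set -euo pipefail
RPC=rpc.py   # e.g. <spdk>/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With a host connected and an AER callback registered (test/nvme/aer/aer here),
# adding a namespace is what produces the "Changed Namespace" notice above:
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
```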
00:29:31.097 [ 00:29:31.097 { 00:29:31.097 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:31.097 "subtype": "Discovery", 00:29:31.097 "listen_addresses": [], 00:29:31.097 "allow_any_host": true, 00:29:31.097 "hosts": [] 00:29:31.097 }, 00:29:31.097 { 00:29:31.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:31.097 "subtype": "NVMe", 00:29:31.097 "listen_addresses": [ 00:29:31.097 { 00:29:31.097 "trtype": "TCP", 00:29:31.097 "adrfam": "IPv4", 00:29:31.097 "traddr": "10.0.0.2", 00:29:31.097 "trsvcid": "4420" 00:29:31.097 } 00:29:31.097 ], 00:29:31.097 "allow_any_host": true, 00:29:31.097 "hosts": [], 00:29:31.097 "serial_number": "SPDK00000000000001", 00:29:31.097 "model_number": "SPDK bdev Controller", 00:29:31.097 "max_namespaces": 2, 00:29:31.097 "min_cntlid": 1, 00:29:31.097 "max_cntlid": 65519, 00:29:31.097 "namespaces": [ 00:29:31.097 { 00:29:31.097 "nsid": 1, 00:29:31.097 "bdev_name": "Malloc0", 00:29:31.097 "name": "Malloc0", 00:29:31.097 "nguid": "C7D622C58C18437392FAB07F46D9B01D", 00:29:31.097 "uuid": "c7d622c5-8c18-4373-92fa-b07f46d9b01d" 00:29:31.097 }, 00:29:31.097 { 00:29:31.097 "nsid": 2, 00:29:31.097 "bdev_name": "Malloc1", 00:29:31.097 "name": "Malloc1", 00:29:31.097 "nguid": "D06E65E7417648CC930F9616B878F564", 00:29:31.097 "uuid": "d06e65e7-4176-48cc-930f-9616b878f564" 00:29:31.097 } 00:29:31.097 ] 00:29:31.097 } 00:29:31.097 ] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1109083 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.097 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.097 rmmod 
nvme_tcp 00:29:31.097 rmmod nvme_fabrics 00:29:31.097 rmmod nvme_keyring 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1109053 ']' 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1109053 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1109053 ']' 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1109053 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109053 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109053' 00:29:31.357 killing process with pid 1109053 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1109053 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1109053 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.357 16:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.895 00:29:33.895 real 0m9.313s 00:29:33.895 user 0m5.468s 00:29:33.895 sys 0m4.866s 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.895 ************************************ 00:29:33.895 END TEST nvmf_aer 00:29:33.895 ************************************ 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.895 ************************************ 00:29:33.895 START TEST nvmf_async_init 00:29:33.895 ************************************ 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:33.895 * Looking for test storage... 00:29:33.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:33.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.895 --rc genhtml_branch_coverage=1 00:29:33.895 --rc genhtml_function_coverage=1 00:29:33.895 --rc genhtml_legend=1 00:29:33.895 --rc geninfo_all_blocks=1 00:29:33.895 --rc geninfo_unexecuted_blocks=1 00:29:33.895 00:29:33.895 ' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:33.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.895 --rc genhtml_branch_coverage=1 00:29:33.895 --rc genhtml_function_coverage=1 00:29:33.895 --rc genhtml_legend=1 00:29:33.895 --rc geninfo_all_blocks=1 00:29:33.895 --rc geninfo_unexecuted_blocks=1 00:29:33.895 00:29:33.895 ' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:33.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.895 --rc genhtml_branch_coverage=1 00:29:33.895 --rc genhtml_function_coverage=1 00:29:33.895 --rc genhtml_legend=1 00:29:33.895 --rc geninfo_all_blocks=1 00:29:33.895 --rc geninfo_unexecuted_blocks=1 00:29:33.895 00:29:33.895 ' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:33.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.895 --rc genhtml_branch_coverage=1 00:29:33.895 --rc genhtml_function_coverage=1 00:29:33.895 --rc genhtml_legend=1 00:29:33.895 --rc geninfo_all_blocks=1 00:29:33.895 --rc geninfo_unexecuted_blocks=1 00:29:33.895 00:29:33.895 ' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.895 16:35:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.895 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:33.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:33.896 16:35:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b023181a4d3c4e4ca65241a1d2fc54e1 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.896 16:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.469 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:40.470 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:40.470 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:40.470 Found net devices under 0000:af:00.0: cvl_0_0 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:40.470 Found net devices under 0000:af:00.1: cvl_0_1 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.470 16:35:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.470 16:35:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.470 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.470 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.470 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.470 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:29:40.470 00:29:40.470 --- 10.0.0.2 ping statistics --- 00:29:40.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.470 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:29:40.470 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:40.470 00:29:40.470 --- 10.0.0.1 ping statistics --- 00:29:40.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.471 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1112549 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1112549 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1112549 ']' 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 [2024-12-16 16:35:28.201774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
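Both test sourcings above emitted the same warning from test/nvmf/common.sh line 33 ('[: : integer expression expected'), because an empty expansion reaches an arithmetic test: '[' '' -eq 1 ']'. The conventional guard is to default the expansion; a sketch, with the flag name hypothetical since the trace does not show which variable line 33 reads:

```bash
#!/usr/bin/env bash
# SOME_TEST_FLAG is a hypothetical stand-in; the trace only shows that the
# variable tested at common.sh line 33 expanded to the empty string.

# Unguarded form, as logged:  '[' '' -eq 1 ']'  ->  "integer expression expected"
# Guarded form: default the expansion so the test always sees an integer.
if [[ "${SOME_TEST_FLAG:-0}" -eq 1 ]]; then
    echo "flag enabled"
fi
```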
00:29:40.471 [2024-12-16 16:35:28.201823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.471 [2024-12-16 16:35:28.278442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.471 [2024-12-16 16:35:28.300033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.471 [2024-12-16 16:35:28.300064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.471 [2024-12-16 16:35:28.300072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.471 [2024-12-16 16:35:28.300080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.471 [2024-12-16 16:35:28.300085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.471 [2024-12-16 16:35:28.300610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 [2024-12-16 16:35:28.440015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 null0 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b023181a4d3c4e4ca65241a1d2fc54e1 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 [2024-12-16 16:35:28.492290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 nvme0n1 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.471 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.471 [ 00:29:40.471 { 00:29:40.471 "name": "nvme0n1", 00:29:40.471 "aliases": [ 00:29:40.471 "b023181a-4d3c-4e4c-a652-41a1d2fc54e1" 00:29:40.471 ], 00:29:40.471 "product_name": "NVMe disk", 00:29:40.471 "block_size": 512, 00:29:40.471 "num_blocks": 2097152, 00:29:40.471 "uuid": "b023181a-4d3c-4e4c-a652-41a1d2fc54e1", 00:29:40.471 "numa_id": 1, 00:29:40.471 "assigned_rate_limits": { 00:29:40.471 "rw_ios_per_sec": 0, 00:29:40.471 "rw_mbytes_per_sec": 0, 00:29:40.471 "r_mbytes_per_sec": 0, 00:29:40.471 "w_mbytes_per_sec": 0 00:29:40.471 }, 00:29:40.471 "claimed": false, 00:29:40.471 "zoned": false, 00:29:40.471 "supported_io_types": { 00:29:40.471 "read": true, 00:29:40.471 "write": true, 00:29:40.471 "unmap": false, 00:29:40.471 "flush": true, 00:29:40.471 "reset": true, 00:29:40.471 "nvme_admin": true, 00:29:40.471 "nvme_io": true, 00:29:40.471 "nvme_io_md": false, 00:29:40.471 "write_zeroes": true, 00:29:40.471 "zcopy": false, 00:29:40.471 "get_zone_info": false, 00:29:40.471 "zone_management": false, 00:29:40.471 "zone_append": false, 00:29:40.471 "compare": true, 00:29:40.471 "compare_and_write": true, 00:29:40.471 "abort": true, 00:29:40.471 "seek_hole": false, 00:29:40.471 "seek_data": false, 00:29:40.471 "copy": true, 00:29:40.471 "nvme_iov_md": false 00:29:40.471 }, 00:29:40.471 
"memory_domains": [ 00:29:40.471 { 00:29:40.471 "dma_device_id": "system", 00:29:40.471 "dma_device_type": 1 00:29:40.471 } 00:29:40.471 ], 00:29:40.471 "driver_specific": { 00:29:40.471 "nvme": [ 00:29:40.471 { 00:29:40.471 "trid": { 00:29:40.471 "trtype": "TCP", 00:29:40.471 "adrfam": "IPv4", 00:29:40.471 "traddr": "10.0.0.2", 00:29:40.471 "trsvcid": "4420", 00:29:40.471 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:40.471 }, 00:29:40.471 "ctrlr_data": { 00:29:40.471 "cntlid": 1, 00:29:40.472 "vendor_id": "0x8086", 00:29:40.472 "model_number": "SPDK bdev Controller", 00:29:40.472 "serial_number": "00000000000000000000", 00:29:40.472 "firmware_revision": "25.01", 00:29:40.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.472 "oacs": { 00:29:40.472 "security": 0, 00:29:40.472 "format": 0, 00:29:40.472 "firmware": 0, 00:29:40.472 "ns_manage": 0 00:29:40.472 }, 00:29:40.472 "multi_ctrlr": true, 00:29:40.472 "ana_reporting": false 00:29:40.472 }, 00:29:40.472 "vs": { 00:29:40.472 "nvme_version": "1.3" 00:29:40.472 }, 00:29:40.472 "ns_data": { 00:29:40.472 "id": 1, 00:29:40.472 "can_share": true 00:29:40.472 } 00:29:40.472 } 00:29:40.472 ], 00:29:40.472 "mp_policy": "active_passive" 00:29:40.472 } 00:29:40.472 } 00:29:40.472 ] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 [2024-12-16 16:35:28.756806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:40.472 [2024-12-16 16:35:28.756863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfaa90 (9): Bad file descriptor 00:29:40.472 [2024-12-16 16:35:28.889180] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 [ 00:29:40.472 { 00:29:40.472 "name": "nvme0n1", 00:29:40.472 "aliases": [ 00:29:40.472 "b023181a-4d3c-4e4c-a652-41a1d2fc54e1" 00:29:40.472 ], 00:29:40.472 "product_name": "NVMe disk", 00:29:40.472 "block_size": 512, 00:29:40.472 "num_blocks": 2097152, 00:29:40.472 "uuid": "b023181a-4d3c-4e4c-a652-41a1d2fc54e1", 00:29:40.472 "numa_id": 1, 00:29:40.472 "assigned_rate_limits": { 00:29:40.472 "rw_ios_per_sec": 0, 00:29:40.472 "rw_mbytes_per_sec": 0, 00:29:40.472 "r_mbytes_per_sec": 0, 00:29:40.472 "w_mbytes_per_sec": 0 00:29:40.472 }, 00:29:40.472 "claimed": false, 00:29:40.472 "zoned": false, 00:29:40.472 "supported_io_types": { 00:29:40.472 "read": true, 00:29:40.472 "write": true, 00:29:40.472 "unmap": false, 00:29:40.472 "flush": true, 00:29:40.472 "reset": true, 00:29:40.472 "nvme_admin": true, 00:29:40.472 "nvme_io": true, 00:29:40.472 "nvme_io_md": false, 00:29:40.472 "write_zeroes": true, 00:29:40.472 "zcopy": false, 00:29:40.472 "get_zone_info": false, 00:29:40.472 "zone_management": false, 00:29:40.472 "zone_append": false, 00:29:40.472 "compare": true, 00:29:40.472 "compare_and_write": true, 00:29:40.472 "abort": true, 00:29:40.472 "seek_hole": false, 00:29:40.472 "seek_data": false, 00:29:40.472 "copy": true, 00:29:40.472 "nvme_iov_md": false 00:29:40.472 }, 00:29:40.472 "memory_domains": [ 00:29:40.472 { 00:29:40.472 "dma_device_id": "system", 00:29:40.472 "dma_device_type": 1 00:29:40.472 } 00:29:40.472 ], 00:29:40.472 "driver_specific": { 00:29:40.472 "nvme": [ 00:29:40.472 { 00:29:40.472 "trid": { 00:29:40.472 "trtype": "TCP", 00:29:40.472 "adrfam": "IPv4", 00:29:40.472 "traddr": "10.0.0.2", 00:29:40.472 "trsvcid": "4420", 00:29:40.472 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:40.472 }, 00:29:40.472 "ctrlr_data": { 00:29:40.472 "cntlid": 2, 00:29:40.472 "vendor_id": "0x8086", 00:29:40.472 "model_number": "SPDK bdev Controller", 00:29:40.472 "serial_number": "00000000000000000000", 00:29:40.472 "firmware_revision": "25.01", 00:29:40.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.472 "oacs": { 00:29:40.472 "security": 0, 00:29:40.472 "format": 0, 00:29:40.472 "firmware": 0, 00:29:40.472 "ns_manage": 0 00:29:40.472 }, 00:29:40.472 "multi_ctrlr": true, 00:29:40.472 "ana_reporting": false 00:29:40.472 }, 00:29:40.472 "vs": { 00:29:40.472 "nvme_version": "1.3" 00:29:40.472 }, 00:29:40.472 "ns_data": { 00:29:40.472 "id": 1, 00:29:40.472 "can_share": true 00:29:40.472 } 00:29:40.472 } 00:29:40.472 ], 00:29:40.472 "mp_policy": "active_passive" 00:29:40.472 } 00:29:40.472 } 00:29:40.472 ] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
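The point of dumping bdev_get_bdevs before and after the reset is the one field that changes: ctrlr_data.cntlid goes from 1 to 2, showing the reconnect created a fresh controller rather than reusing the old one. To pull just that field from the dump (jq usage is illustrative, not part of the test script; the path follows the JSON shown above):

$RPC bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'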
00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Vo16dYu95P 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Vo16dYu95P 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Vo16dYu95P 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 [2024-12-16 16:35:28.965424] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:40.472 [2024-12-16 16:35:28.965511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 [2024-12-16 16:35:28.985489] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:40.472 nvme0n1 00:29:40.472 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.472 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:40.472 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.472 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.472 [ 00:29:40.472 { 00:29:40.472 "name": "nvme0n1", 00:29:40.472 "aliases": [ 00:29:40.472 "b023181a-4d3c-4e4c-a652-41a1d2fc54e1" 00:29:40.472 ], 00:29:40.472 "product_name": "NVMe disk", 00:29:40.472 "block_size": 512, 00:29:40.473 "num_blocks": 2097152, 00:29:40.473 "uuid": "b023181a-4d3c-4e4c-a652-41a1d2fc54e1", 00:29:40.473 "numa_id": 1, 00:29:40.473 "assigned_rate_limits": { 00:29:40.473 "rw_ios_per_sec": 0, 00:29:40.473 "rw_mbytes_per_sec": 0, 00:29:40.473 "r_mbytes_per_sec": 0, 00:29:40.473 "w_mbytes_per_sec": 0 00:29:40.473 }, 00:29:40.473 "claimed": false, 00:29:40.473 "zoned": false, 00:29:40.473 "supported_io_types": { 00:29:40.473 "read": true, 00:29:40.473 "write": true, 00:29:40.473 "unmap": false, 00:29:40.473 "flush": true, 00:29:40.473 "reset": true, 00:29:40.473 "nvme_admin": true, 00:29:40.473 "nvme_io": true, 00:29:40.473 "nvme_io_md": false, 00:29:40.473 "write_zeroes": true, 00:29:40.473 "zcopy": false, 00:29:40.473 "get_zone_info": false, 00:29:40.473 "zone_management": false, 00:29:40.473 "zone_append": false, 00:29:40.473 "compare": true, 00:29:40.473 "compare_and_write": true, 00:29:40.473 "abort": true, 00:29:40.473 "seek_hole": false, 00:29:40.473 "seek_data": false, 00:29:40.473 "copy": true, 00:29:40.473 "nvme_iov_md": false 00:29:40.473 }, 00:29:40.473 "memory_domains": [ 00:29:40.473 { 00:29:40.473 "dma_device_id": "system", 00:29:40.473 "dma_device_type": 1 00:29:40.473 } 00:29:40.473 ], 00:29:40.473 "driver_specific": { 00:29:40.473 "nvme": [ 00:29:40.473 { 00:29:40.473 "trid": { 00:29:40.473 "trtype": "TCP", 00:29:40.473 "adrfam": "IPv4", 00:29:40.473 "traddr": "10.0.0.2", 00:29:40.473 "trsvcid": "4421", 00:29:40.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:40.473 }, 00:29:40.473 "ctrlr_data": { 00:29:40.473 "cntlid": 3, 00:29:40.473 "vendor_id": "0x8086", 00:29:40.473 "model_number": "SPDK bdev Controller", 00:29:40.473 "serial_number": "00000000000000000000", 00:29:40.473 "firmware_revision": "25.01", 00:29:40.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.473 "oacs": { 00:29:40.473 "security": 0, 00:29:40.473 "format": 0, 00:29:40.473 "firmware": 0, 00:29:40.473 "ns_manage": 0 00:29:40.473 }, 00:29:40.473 "multi_ctrlr": true, 00:29:40.473 "ana_reporting": false 00:29:40.473 }, 00:29:40.473 "vs": { 00:29:40.473 "nvme_version": "1.3" 00:29:40.473 }, 00:29:40.473 "ns_data": { 00:29:40.473 "id": 1, 00:29:40.473 "can_share": true 00:29:40.473 } 00:29:40.473 } 00:29:40.473 ], 00:29:40.473 "mp_policy": "active_passive" 00:29:40.473 } 00:29:40.473 } 00:29:40.473 ] 00:29:40.473 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.473 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.473 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.473 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Vo16dYu95P 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
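The TLS leg of the test (async_init.sh@53 through @76) reuses the same subsystem behind a PSK-protected listener on port 4421. Security is opted into in three places: --secure-channel on the listener, --psk on the per-host entry, and --psk again on the initiator attach. Condensed, with the interchange-format key copied verbatim from the trace (the redirection into the key file is inferred, since xtrace does not show it):

KEY_PATH=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

$RPC keyring_file_add_key key0 "$KEY_PATH"
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
# the resulting bdev dump shows trsvcid 4421 and cntlid 3, as in the trace below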
00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.733 rmmod nvme_tcp 00:29:40.733 rmmod nvme_fabrics 00:29:40.733 rmmod nvme_keyring 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1112549 ']' 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1112549 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1112549 ']' 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1112549 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1112549 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1112549' 00:29:40.733 killing process with pid 1112549 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1112549 00:29:40.733 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1112549 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
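nvmftestfini unwinds all of it in reverse order: kill the target, pull the kernel initiator modules back out, strip only the firewall rules the harness tagged, and drop the namespace. A hedged sketch (the body of _remove_spdk_ns is not shown in this trace; ip netns del is the assumed core of it):

kill 1112549 && wait 1112549                           # killprocess, pid from the trace
modprobe -v -r nvme-tcp                                # also unloads nvme_fabrics/nvme_keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK-tagged rules
ip netns del cvl_0_0_ns_spdk                           # assumption: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1                               # flush the initiator-side address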
00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.993 16:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.900 00:29:42.900 real 0m9.355s 00:29:42.900 user 0m3.050s 00:29:42.900 sys 0m4.726s 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.900 ************************************ 00:29:42.900 END TEST nvmf_async_init 00:29:42.900 ************************************ 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.900 16:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.158 ************************************ 00:29:43.158 START TEST dma 00:29:43.158 ************************************ 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:43.158 * Looking for test storage... 00:29:43.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:43.158 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:43.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.159 --rc genhtml_branch_coverage=1 00:29:43.159 --rc genhtml_function_coverage=1 00:29:43.159 --rc genhtml_legend=1 00:29:43.159 --rc geninfo_all_blocks=1 00:29:43.159 --rc geninfo_unexecuted_blocks=1 00:29:43.159 00:29:43.159 ' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:43.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.159 --rc genhtml_branch_coverage=1 00:29:43.159 --rc genhtml_function_coverage=1 00:29:43.159 --rc genhtml_legend=1 00:29:43.159 --rc geninfo_all_blocks=1 00:29:43.159 --rc geninfo_unexecuted_blocks=1 00:29:43.159 00:29:43.159 ' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:43.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.159 --rc genhtml_branch_coverage=1 00:29:43.159 --rc genhtml_function_coverage=1 00:29:43.159 --rc genhtml_legend=1 00:29:43.159 --rc geninfo_all_blocks=1 00:29:43.159 --rc geninfo_unexecuted_blocks=1 00:29:43.159 00:29:43.159 ' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:43.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.159 --rc genhtml_branch_coverage=1 00:29:43.159 --rc genhtml_function_coverage=1 00:29:43.159 --rc genhtml_legend=1 00:29:43.159 --rc geninfo_all_blocks=1 00:29:43.159 --rc geninfo_unexecuted_blocks=1 00:29:43.159 00:29:43.159 ' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.159 
16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:43.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:43.159 00:29:43.159 real 0m0.208s 00:29:43.159 user 0m0.138s 00:29:43.159 sys 0m0.083s 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:43.159 ************************************ 00:29:43.159 END TEST dma 00:29:43.159 ************************************ 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.159 16:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.419 ************************************ 00:29:43.419 START TEST nvmf_identify 00:29:43.419 
************************************ 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:43.419 * Looking for test storage... 00:29:43.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:43.419 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:43.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.420 --rc genhtml_branch_coverage=1 00:29:43.420 --rc genhtml_function_coverage=1 00:29:43.420 --rc genhtml_legend=1 00:29:43.420 --rc geninfo_all_blocks=1 00:29:43.420 --rc geninfo_unexecuted_blocks=1 00:29:43.420 00:29:43.420 ' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:43.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.420 --rc genhtml_branch_coverage=1 00:29:43.420 --rc genhtml_function_coverage=1 00:29:43.420 --rc genhtml_legend=1 00:29:43.420 --rc geninfo_all_blocks=1 00:29:43.420 --rc geninfo_unexecuted_blocks=1 00:29:43.420 00:29:43.420 ' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:43.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.420 --rc genhtml_branch_coverage=1 00:29:43.420 --rc genhtml_function_coverage=1 00:29:43.420 --rc genhtml_legend=1 00:29:43.420 --rc geninfo_all_blocks=1 00:29:43.420 --rc geninfo_unexecuted_blocks=1 00:29:43.420 00:29:43.420 ' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:43.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.420 --rc genhtml_branch_coverage=1 00:29:43.420 --rc genhtml_function_coverage=1 00:29:43.420 --rc genhtml_legend=1 00:29:43.420 --rc geninfo_all_blocks=1 00:29:43.420 --rc geninfo_unexecuted_blocks=1 00:29:43.420 00:29:43.420 ' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:43.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.420 16:35:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.420 16:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.420 16:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.420 16:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.420 16:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:49.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:49.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
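The device-discovery block above is sysfs spelunking: nvmf/common.sh keeps arrays of supported Intel (e810, x722) and Mellanox PCI IDs, matches them against the bus, and maps each hit to its kernel netdev. The core lookup, reduced to a sketch (this is what produces the 'Found net devices under ...' lines that follow):

pci=0000:af:00.0                                   # one of the two ice (0x8086:0x159b) functions
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"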
00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:49.997 Found net devices under 0000:af:00.0: cvl_0_0 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:49.997 Found net devices under 0000:af:00.1: cvl_0_1 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:49.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:29:49.997 00:29:49.997 --- 10.0.0.2 ping statistics --- 00:29:49.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.997 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:49.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:29:49.997 00:29:49.997 --- 10.0.0.1 ping statistics --- 00:29:49.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.997 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:49.997 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1116305 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1116305 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1116305 ']' 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.998 16:35:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 [2024-12-16 16:35:37.872761] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
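The nvmf_tcp_init sequence above lets one host play both roles: the target NIC (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator NIC (cvl_0_1, 10.0.0.1) stays in the default namespace, TCP port 4420 is opened between them, and reachability is verified with the two pings. A condensed sketch of the same setup, with the interface names taken from this run:

    # Isolate the target NIC in its own network namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target binary is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ..., via NVMF_TARGET_NS_CMD), which is why every target-side command in the trace carries the netns prefix.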
00:29:49.998 [2024-12-16 16:35:37.872804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.998 [2024-12-16 16:35:37.936773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.998 [2024-12-16 16:35:37.960991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.998 [2024-12-16 16:35:37.961028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.998 [2024-12-16 16:35:37.961035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.998 [2024-12-16 16:35:37.961042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.998 [2024-12-16 16:35:37.961049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.998 [2024-12-16 16:35:37.966132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.998 [2024-12-16 16:35:37.966165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.998 [2024-12-16 16:35:37.966270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.998 [2024-12-16 16:35:37.966271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 [2024-12-16 16:35:38.057764] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 Malloc0 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 [2024-12-16 16:35:38.148426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.998 [ 00:29:49.998 { 00:29:49.998 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:49.998 "subtype": "Discovery", 00:29:49.998 "listen_addresses": [ 00:29:49.998 { 00:29:49.998 "trtype": "TCP", 00:29:49.998 "adrfam": "IPv4", 00:29:49.998 "traddr": "10.0.0.2", 00:29:49.998 "trsvcid": "4420" 00:29:49.998 } 00:29:49.998 ], 00:29:49.998 "allow_any_host": true, 00:29:49.998 "hosts": [] 00:29:49.998 }, 00:29:49.998 { 00:29:49.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.998 "subtype": "NVMe", 00:29:49.998 "listen_addresses": [ 00:29:49.998 { 00:29:49.998 "trtype": "TCP", 00:29:49.998 "adrfam": "IPv4", 00:29:49.998 "traddr": "10.0.0.2", 00:29:49.998 "trsvcid": "4420" 00:29:49.998 } 00:29:49.998 ], 00:29:49.998 "allow_any_host": true, 00:29:49.998 "hosts": [], 00:29:49.998 "serial_number": "SPDK00000000000001", 00:29:49.998 "model_number": "SPDK bdev Controller", 00:29:49.998 "max_namespaces": 32, 00:29:49.998 "min_cntlid": 1, 00:29:49.998 "max_cntlid": 65519, 00:29:49.998 "namespaces": [ 00:29:49.998 { 00:29:49.998 "nsid": 1, 00:29:49.998 "bdev_name": "Malloc0", 00:29:49.998 "name": "Malloc0", 00:29:49.998 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:49.998 "eui64": "ABCDEF0123456789", 00:29:49.998 "uuid": "fbdf6485-e306-4403-870f-8291a578ef26" 00:29:49.998 } 00:29:49.998 ] 00:29:49.998 } 00:29:49.998 ] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.998 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:49.998 [2024-12-16 16:35:38.199316] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:49.998 [2024-12-16 16:35:38.199350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116330 ] 00:29:49.998 [2024-12-16 16:35:38.240591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:49.998 [2024-12-16 16:35:38.240636] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:49.998 [2024-12-16 16:35:38.240641] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:49.998 [2024-12-16 16:35:38.240651] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:49.998 [2024-12-16 16:35:38.240658] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:49.998 [2024-12-16 16:35:38.241207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:49.998 [2024-12-16 16:35:38.241241] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x182ded0 0 00:29:49.998 [2024-12-16 16:35:38.251301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:49.998 [2024-12-16 16:35:38.251316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:49.998 [2024-12-16 16:35:38.251321] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:49.998 [2024-12-16 16:35:38.251324] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:49.998 [2024-12-16 16:35:38.251355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.998 [2024-12-16 16:35:38.251361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.998 [2024-12-16 16:35:38.251364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.998 [2024-12-16 16:35:38.251377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:49.998 [2024-12-16 16:35:38.251395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.998 [2024-12-16 16:35:38.262104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.998 [2024-12-16 16:35:38.262113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.998 [2024-12-16 16:35:38.262117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.998 [2024-12-16 16:35:38.262121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262133] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:49.999 [2024-12-16 16:35:38.262139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:49.999 [2024-12-16 16:35:38.262147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:49.999 [2024-12-16 16:35:38.262158] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.262172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.262185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.999 [2024-12-16 16:35:38.262317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.262323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.262326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:49.999 [2024-12-16 16:35:38.262340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:49.999 [2024-12-16 16:35:38.262346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.262359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.262369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.999 [2024-12-16 16:35:38.262427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.262433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.262436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:49.999 [2024-12-16 16:35:38.262451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:49.999 [2024-12-16 16:35:38.262457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.262468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.262478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 
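Once nvmf_tgt is up on its RPC socket, the target configuration performed by host/identify.sh above reduces to a handful of RPC calls; rpc_cmd in the trace is effectively the test harness's wrapper around SPDK's scripts/rpc.py, so a sketch of the same provisioning with the exact identifiers and sizes from this run would be:

    # Provision the target over RPC: TCP transport, a 64 MB malloc bdev with
    # 512-byte blocks, one NVM subsystem with a namespace and its listeners
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems   # should list discovery plus cnode1, as in the JSON above

The RPC endpoint (/var/tmp/spdk.sock) is a Unix-domain socket, so these calls typically need no ip netns exec prefix even though the target's TCP listener lives inside the namespace.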
00:29:49.999 [2024-12-16 16:35:38.262536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.262542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.262545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:49.999 [2024-12-16 16:35:38.262560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.262575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.262584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.999 [2024-12-16 16:35:38.262642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.262647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.262650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262657] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:49.999 [2024-12-16 16:35:38.262662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:49.999 [2024-12-16 16:35:38.262668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:49.999 [2024-12-16 16:35:38.262775] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:49.999 [2024-12-16 16:35:38.262780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:49.999 [2024-12-16 16:35:38.262787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.262799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.262808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.999 [2024-12-16 16:35:38.262869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.262875] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.262878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:49.999 [2024-12-16 16:35:38.262893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.262906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.262915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.999 [2024-12-16 16:35:38.262980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.262985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.262988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.262991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.262995] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:49.999 [2024-12-16 16:35:38.263001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:49.999 [2024-12-16 16:35:38.263008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:49.999 [2024-12-16 16:35:38.263014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:49.999 [2024-12-16 16:35:38.263022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.263025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.263031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.999 [2024-12-16 16:35:38.263040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:49.999 [2024-12-16 16:35:38.263127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.999 [2024-12-16 16:35:38.263134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.999 [2024-12-16 16:35:38.263137] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.263141] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x182ded0): datao=0, datal=4096, cccid=0 00:29:49.999 [2024-12-16 16:35:38.263145] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1899540) on tqpair(0x182ded0): expected_datao=0, payload_size=4096 00:29:49.999 [2024-12-16 16:35:38.263149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.263162] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.263167] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.304229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.999 [2024-12-16 16:35:38.304242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.999 [2024-12-16 16:35:38.304246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.304250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:49.999 [2024-12-16 16:35:38.304258] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:49.999 [2024-12-16 16:35:38.304263] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:49.999 [2024-12-16 16:35:38.304267] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:49.999 [2024-12-16 16:35:38.304274] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:49.999 [2024-12-16 16:35:38.304280] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:49.999 [2024-12-16 16:35:38.304285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:49.999 [2024-12-16 16:35:38.304298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:49.999 [2024-12-16 16:35:38.304308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.304312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.999 [2024-12-16 16:35:38.304316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:49.999 [2024-12-16 16:35:38.304323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:50.000 [2024-12-16 16:35:38.304335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:50.000 [2024-12-16 16:35:38.304401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.000 [2024-12-16 16:35:38.304407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.000 [2024-12-16 16:35:38.304410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:50.000 [2024-12-16 16:35:38.304421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x182ded0) 00:29:50.000 
[2024-12-16 16:35:38.304432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.000 [2024-12-16 16:35:38.304437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.000 [2024-12-16 16:35:38.304453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.000 [2024-12-16 16:35:38.304470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.000 [2024-12-16 16:35:38.304485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:50.000 [2024-12-16 16:35:38.304495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:50.000 [2024-12-16 16:35:38.304501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.000 [2024-12-16 16:35:38.304521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899540, cid 0, qid 0 00:29:50.000 [2024-12-16 16:35:38.304526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18996c0, cid 1, qid 0 00:29:50.000 [2024-12-16 16:35:38.304530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899840, cid 2, qid 0 00:29:50.000 [2024-12-16 16:35:38.304534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18999c0, cid 3, qid 0 00:29:50.000 [2024-12-16 16:35:38.304538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899b40, cid 4, qid 0 00:29:50.000 [2024-12-16 16:35:38.304627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.000 [2024-12-16 16:35:38.304633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.000 [2024-12-16 16:35:38.304636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:50.000 [2024-12-16 16:35:38.304639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899b40) on tqpair=0x182ded0 00:29:50.000 [2024-12-16 16:35:38.304646] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:50.000 [2024-12-16 16:35:38.304651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:50.000 [2024-12-16 16:35:38.304660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.000 [2024-12-16 16:35:38.304679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899b40, cid 4, qid 0 00:29:50.000 [2024-12-16 16:35:38.304750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.000 [2024-12-16 16:35:38.304756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.000 [2024-12-16 16:35:38.304759] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304762] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x182ded0): datao=0, datal=4096, cccid=4 00:29:50.000 [2024-12-16 16:35:38.304766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1899b40) on tqpair(0x182ded0): expected_datao=0, payload_size=4096 00:29:50.000 [2024-12-16 16:35:38.304770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304793] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.000 [2024-12-16 16:35:38.304833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.000 [2024-12-16 16:35:38.304836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899b40) on tqpair=0x182ded0 00:29:50.000 [2024-12-16 16:35:38.304851] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:50.000 [2024-12-16 16:35:38.304874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.000 [2024-12-16 16:35:38.304890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.304896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.304901] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.000 [2024-12-16 16:35:38.304914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899b40, cid 4, qid 0 00:29:50.000 [2024-12-16 16:35:38.304918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899cc0, cid 5, qid 0 00:29:50.000 [2024-12-16 16:35:38.305014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.000 [2024-12-16 16:35:38.305019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.000 [2024-12-16 16:35:38.305022] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.305025] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x182ded0): datao=0, datal=1024, cccid=4 00:29:50.000 [2024-12-16 16:35:38.305029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1899b40) on tqpair(0x182ded0): expected_datao=0, payload_size=1024 00:29:50.000 [2024-12-16 16:35:38.305033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.305040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.305043] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.305048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.000 [2024-12-16 16:35:38.305053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.000 [2024-12-16 16:35:38.305056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.305059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899cc0) on tqpair=0x182ded0 00:29:50.000 [2024-12-16 16:35:38.349101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.000 [2024-12-16 16:35:38.349114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.000 [2024-12-16 16:35:38.349117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.349121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899b40) on tqpair=0x182ded0 00:29:50.000 [2024-12-16 16:35:38.349132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.349135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.349142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.000 [2024-12-16 16:35:38.349158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899b40, cid 4, qid 0 00:29:50.000 [2024-12-16 16:35:38.349308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.000 [2024-12-16 16:35:38.349315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.000 [2024-12-16 16:35:38.349318] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.349321] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x182ded0): datao=0, datal=3072, cccid=4 00:29:50.000 [2024-12-16 16:35:38.349325] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1899b40) on tqpair(0x182ded0): expected_datao=0, payload_size=3072 00:29:50.000 [2024-12-16 16:35:38.349329] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.349343] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.349347] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.390184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.000 [2024-12-16 16:35:38.390199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.000 [2024-12-16 16:35:38.390204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.390207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899b40) on tqpair=0x182ded0 00:29:50.000 [2024-12-16 16:35:38.390217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.390221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x182ded0) 00:29:50.000 [2024-12-16 16:35:38.390229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.000 [2024-12-16 16:35:38.390245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1899b40, cid 4, qid 0 00:29:50.000 [2024-12-16 16:35:38.390346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.000 [2024-12-16 16:35:38.390351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.000 [2024-12-16 16:35:38.390355] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.000 [2024-12-16 16:35:38.390358] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x182ded0): datao=0, datal=8, cccid=4 00:29:50.000 [2024-12-16 16:35:38.390362] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1899b40) on tqpair(0x182ded0): expected_datao=0, payload_size=8 00:29:50.000 [2024-12-16 16:35:38.390366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.001 [2024-12-16 16:35:38.390372] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.001 [2024-12-16 16:35:38.390379] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.001 [2024-12-16 16:35:38.433104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.001 [2024-12-16 16:35:38.433115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.001 [2024-12-16 16:35:38.433119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.001 [2024-12-16 16:35:38.433122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899b40) on tqpair=0x182ded0 00:29:50.001 ===================================================== 00:29:50.001 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:50.001 ===================================================== 00:29:50.001 Controller Capabilities/Features 00:29:50.001 ================================ 00:29:50.001 Vendor ID: 0000 00:29:50.001 Subsystem Vendor ID: 0000 00:29:50.001 Serial Number: .................... 00:29:50.001 Model Number: ........................................ 
00:29:50.001 Firmware Version: 25.01 00:29:50.001 Recommended Arb Burst: 0 00:29:50.001 IEEE OUI Identifier: 00 00 00 00:29:50.001 Multi-path I/O 00:29:50.001 May have multiple subsystem ports: No 00:29:50.001 May have multiple controllers: No 00:29:50.001 Associated with SR-IOV VF: No 00:29:50.001 Max Data Transfer Size: 131072 00:29:50.001 Max Number of Namespaces: 0 00:29:50.001 Max Number of I/O Queues: 1024 00:29:50.001 NVMe Specification Version (VS): 1.3 00:29:50.001 NVMe Specification Version (Identify): 1.3 00:29:50.001 Maximum Queue Entries: 128 00:29:50.001 Contiguous Queues Required: Yes 00:29:50.001 Arbitration Mechanisms Supported 00:29:50.001 Weighted Round Robin: Not Supported 00:29:50.001 Vendor Specific: Not Supported 00:29:50.001 Reset Timeout: 15000 ms 00:29:50.001 Doorbell Stride: 4 bytes 00:29:50.001 NVM Subsystem Reset: Not Supported 00:29:50.001 Command Sets Supported 00:29:50.001 NVM Command Set: Supported 00:29:50.001 Boot Partition: Not Supported 00:29:50.001 Memory Page Size Minimum: 4096 bytes 00:29:50.001 Memory Page Size Maximum: 4096 bytes 00:29:50.001 Persistent Memory Region: Not Supported 00:29:50.001 Optional Asynchronous Events Supported 00:29:50.001 Namespace Attribute Notices: Not Supported 00:29:50.001 Firmware Activation Notices: Not Supported 00:29:50.001 ANA Change Notices: Not Supported 00:29:50.001 PLE Aggregate Log Change Notices: Not Supported 00:29:50.001 LBA Status Info Alert Notices: Not Supported 00:29:50.001 EGE Aggregate Log Change Notices: Not Supported 00:29:50.001 Normal NVM Subsystem Shutdown event: Not Supported 00:29:50.001 Zone Descriptor Change Notices: Not Supported 00:29:50.001 Discovery Log Change Notices: Supported 00:29:50.001 Controller Attributes 00:29:50.001 128-bit Host Identifier: Not Supported 00:29:50.001 Non-Operational Permissive Mode: Not Supported 00:29:50.001 NVM Sets: Not Supported 00:29:50.001 Read Recovery Levels: Not Supported 00:29:50.001 Endurance Groups: Not Supported 00:29:50.001 Predictable Latency Mode: Not Supported 00:29:50.001 Traffic Based Keep Alive: Not Supported 00:29:50.001 Namespace Granularity: Not Supported 00:29:50.001 SQ Associations: Not Supported 00:29:50.001 UUID List: Not Supported 00:29:50.001 Multi-Domain Subsystem: Not Supported 00:29:50.001 Fixed Capacity Management: Not Supported 00:29:50.001 Variable Capacity Management: Not Supported 00:29:50.001 Delete Endurance Group: Not Supported 00:29:50.001 Delete NVM Set: Not Supported 00:29:50.001 Extended LBA Formats Supported: Not Supported 00:29:50.001 Flexible Data Placement Supported: Not Supported 00:29:50.001 00:29:50.001 Controller Memory Buffer Support 00:29:50.001 ================================ 00:29:50.001 Supported: No 00:29:50.001 00:29:50.001 Persistent Memory Region Support 00:29:50.001 ================================ 00:29:50.001 Supported: No 00:29:50.001 00:29:50.001 Admin Command Set Attributes 00:29:50.001 ============================ 00:29:50.001 Security Send/Receive: Not Supported 00:29:50.001 Format NVM: Not Supported 00:29:50.001 Firmware Activate/Download: Not Supported 00:29:50.001 Namespace Management: Not Supported 00:29:50.001 Device Self-Test: Not Supported 00:29:50.001 Directives: Not Supported 00:29:50.001 NVMe-MI: Not Supported 00:29:50.001 Virtualization Management: Not Supported 00:29:50.001 Doorbell Buffer Config: Not Supported 00:29:50.001 Get LBA Status Capability: Not Supported 00:29:50.001 Command & Feature Lockdown Capability: Not Supported 00:29:50.001 Abort Command Limit: 1 00:29:50.001 Async 
Event Request Limit: 4 00:29:50.001 Number of Firmware Slots: N/A 00:29:50.001 Firmware Slot 1 Read-Only: N/A 00:29:50.001 Firmware Activation Without Reset: N/A 00:29:50.001 Multiple Update Detection Support: N/A 00:29:50.001 Firmware Update Granularity: No Information Provided 00:29:50.001 Per-Namespace SMART Log: No 00:29:50.001 Asymmetric Namespace Access Log Page: Not Supported 00:29:50.001 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:50.001 Command Effects Log Page: Not Supported 00:29:50.001 Get Log Page Extended Data: Supported 00:29:50.001 Telemetry Log Pages: Not Supported 00:29:50.001 Persistent Event Log Pages: Not Supported 00:29:50.001 Supported Log Pages Log Page: May Support 00:29:50.001 Commands Supported & Effects Log Page: Not Supported 00:29:50.001 Feature Identifiers & Effects Log Page: May Support 00:29:50.001 NVMe-MI Commands & Effects Log Page: May Support 00:29:50.001 Data Area 4 for Telemetry Log: Not Supported 00:29:50.001 Error Log Page Entries Supported: 128 00:29:50.001 Keep Alive: Not Supported 00:29:50.001 00:29:50.001 NVM Command Set Attributes 00:29:50.001 ========================== 00:29:50.001 Submission Queue Entry Size 00:29:50.001 Max: 1 00:29:50.001 Min: 1 00:29:50.001 Completion Queue Entry Size 00:29:50.001 Max: 1 00:29:50.001 Min: 1 00:29:50.001 Number of Namespaces: 0 00:29:50.001 Compare Command: Not Supported 00:29:50.001 Write Uncorrectable Command: Not Supported 00:29:50.001 Dataset Management Command: Not Supported 00:29:50.001 Write Zeroes Command: Not Supported 00:29:50.001 Set Features Save Field: Not Supported 00:29:50.001 Reservations: Not Supported 00:29:50.001 Timestamp: Not Supported 00:29:50.001 Copy: Not Supported 00:29:50.001 Volatile Write Cache: Not Present 00:29:50.001 Atomic Write Unit (Normal): 1 00:29:50.001 Atomic Write Unit (PFail): 1 00:29:50.001 Atomic Compare & Write Unit: 1 00:29:50.001 Fused Compare & Write: Supported 00:29:50.001 Scatter-Gather List 00:29:50.001 SGL Command Set: Supported 00:29:50.001 SGL Keyed: Supported 00:29:50.001 SGL Bit Bucket Descriptor: Not Supported 00:29:50.001 SGL Metadata Pointer: Not Supported 00:29:50.001 Oversized SGL: Not Supported 00:29:50.001 SGL Metadata Address: Not Supported 00:29:50.001 SGL Offset: Supported 00:29:50.001 Transport SGL Data Block: Not Supported 00:29:50.001 Replay Protected Memory Block: Not Supported 00:29:50.001 00:29:50.001 Firmware Slot Information 00:29:50.001 ========================= 00:29:50.001 Active slot: 0 00:29:50.001 00:29:50.001 00:29:50.001 Error Log 00:29:50.001 ========= 00:29:50.001 00:29:50.001 Active Namespaces 00:29:50.001 ================= 00:29:50.001 Discovery Log Page 00:29:50.001 ================== 00:29:50.001 Generation Counter: 2 00:29:50.001 Number of Records: 2 00:29:50.001 Record Format: 0 00:29:50.001 00:29:50.001 Discovery Log Entry 0 00:29:50.001 ---------------------- 00:29:50.001 Transport Type: 3 (TCP) 00:29:50.001 Address Family: 1 (IPv4) 00:29:50.001 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:50.001 Entry Flags: 00:29:50.001 Duplicate Returned Information: 1 00:29:50.001 Explicit Persistent Connection Support for Discovery: 1 00:29:50.002 Transport Requirements: 00:29:50.002 Secure Channel: Not Required 00:29:50.002 Port ID: 0 (0x0000) 00:29:50.002 Controller ID: 65535 (0xffff) 00:29:50.002 Admin Max SQ Size: 128 00:29:50.002 Transport Service Identifier: 4420 00:29:50.002 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:50.002 Transport Address: 10.0.0.2 00:29:50.002 
Discovery Log Entry 1 00:29:50.002 ---------------------- 00:29:50.002 Transport Type: 3 (TCP) 00:29:50.002 Address Family: 1 (IPv4) 00:29:50.002 Subsystem Type: 2 (NVM Subsystem) 00:29:50.002 Entry Flags: 00:29:50.002 Duplicate Returned Information: 0 00:29:50.002 Explicit Persistent Connection Support for Discovery: 0 00:29:50.002 Transport Requirements: 00:29:50.002 Secure Channel: Not Required 00:29:50.002 Port ID: 0 (0x0000) 00:29:50.002 Controller ID: 65535 (0xffff) 00:29:50.002 Admin Max SQ Size: 128 00:29:50.002 Transport Service Identifier: 4420 00:29:50.002 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:50.002 Transport Address: 10.0.0.2 [2024-12-16 16:35:38.433206] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:50.002 [2024-12-16 16:35:38.433218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899540) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.002 [2024-12-16 16:35:38.433229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18996c0) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.002 [2024-12-16 16:35:38.433237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1899840) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.002 [2024-12-16 16:35:38.433246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18999c0) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.002 [2024-12-16 16:35:38.433257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x182ded0) 00:29:50.002 [2024-12-16 16:35:38.433270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.002 [2024-12-16 16:35:38.433284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18999c0, cid 3, qid 0 00:29:50.002 [2024-12-16 16:35:38.433348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.002 [2024-12-16 16:35:38.433354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.002 [2024-12-16 16:35:38.433357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18999c0) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x182ded0) 00:29:50.002 [2024-12-16 
16:35:38.433378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.002 [2024-12-16 16:35:38.433390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18999c0, cid 3, qid 0 00:29:50.002 [2024-12-16 16:35:38.433463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.002 [2024-12-16 16:35:38.433469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.002 [2024-12-16 16:35:38.433472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18999c0) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433480] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:50.002 [2024-12-16 16:35:38.433485] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:50.002 [2024-12-16 16:35:38.433494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x182ded0) 00:29:50.002 [2024-12-16 16:35:38.433507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.002 [2024-12-16 16:35:38.433516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18999c0, cid 3, qid 0 00:29:50.002 [2024-12-16 16:35:38.433579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.002 [2024-12-16 16:35:38.433584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.002 [2024-12-16 16:35:38.433587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18999c0) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x182ded0) 00:29:50.002 [2024-12-16 16:35:38.433611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.002 [2024-12-16 16:35:38.433620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18999c0, cid 3, qid 0 00:29:50.002 [2024-12-16 16:35:38.433679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.002 [2024-12-16 16:35:38.433685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.002 [2024-12-16 16:35:38.433688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18999c0) on tqpair=0x182ded0 00:29:50.002 [2024-12-16 16:35:38.433699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.002 [2024-12-16 16:35:38.433706] 
00:29:50.004 [2024-12-16 16:35:38.440130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:50.004 [2024-12-16 16:35:38.440134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:50.004 [2024-12-16 16:35:38.440137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x182ded0)
00:29:50.004 [2024-12-16 16:35:38.440143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.004 [2024-12-16 16:35:38.440155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18999c0, cid 3, qid 0
00:29:50.004 [2024-12-16 16:35:38.440212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:50.004 [2024-12-16 16:35:38.440218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:50.004 [2024-12-16 16:35:38.440224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:50.004 [2024-12-16 16:35:38.440227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18999c0) on tqpair=0x182ded0
00:29:50.004 [2024-12-16 16:35:38.440234] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:29:50.004
00:29:50.004 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:50.004 [2024-12-16 16:35:38.476821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:50.004 [2024-12-16 16:35:38.476859] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116379 ]
00:29:50.004 [2024-12-16 16:35:38.518299] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:29:50.004 [2024-12-16 16:35:38.518339] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:29:50.004 [2024-12-16 16:35:38.518343] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:29:50.004 [2024-12-16 16:35:38.518354] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:29:50.004 [2024-12-16 16:35:38.518361] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:29:50.004 [2024-12-16 16:35:38.518731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:29:50.004 [2024-12-16 16:35:38.518758] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19aeed0 0
00:29:50.004 [2024-12-16 16:35:38.529112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:29:50.004 [2024-12-16 16:35:38.529131] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:29:50.004 [2024-12-16 16:35:38.529134] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
[... from here on, each admin command is wrapped in the same nvme_tcp build_contig_request / capsule_cmd_send / cmd_send_complete and pdu type = 5 response DEBUG records shown in the shutdown sequence above; only the command and controller state records are reproduced ...]
00:29:50.005 [2024-12-16 16:35:38.529176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:50.005 [2024-12-16 16:35:38.540133] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:50.005 [2024-12-16 16:35:38.540139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:29:50.005 [2024-12-16 16:35:38.540144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:29:50.005 [2024-12-16 16:35:38.540170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.540284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:29:50.005 [2024-12-16 16:35:38.540290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:29:50.005 [2024-12-16 16:35:38.540309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.540396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:29:50.005 [2024-12-16 16:35:38.540403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:29:50.005 [2024-12-16 16:35:38.540420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.540508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:29:50.005 [2024-12-16 16:35:38.540530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.540612] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:29:50.005 [2024-12-16 16:35:38.540616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:29:50.005 [2024-12-16 16:35:38.540623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:29:50.005 [2024-12-16 16:35:38.540730] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:29:50.005 [2024-12-16 16:35:38.540734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:29:50.005 [2024-12-16 16:35:38.540753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.540834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:29:50.005 [2024-12-16 16:35:38.540855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.540952] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:29:50.005 [2024-12-16 16:35:38.540957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:29:50.005 [2024-12-16 16:35:38.540963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:29:50.005 [2024-12-16 16:35:38.540972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:29:50.005 [2024-12-16 16:35:38.540988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.005 [2024-12-16 16:35:38.541084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:50.005 [2024-12-16 16:35:38.541103] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=4096, cccid=0
00:29:50.005 [2024-12-16 16:35:38.541107] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1a540) on tqpair(0x19aeed0): expected_datao=0, payload_size=4096
00:29:50.005 [2024-12-16 16:35:38.583130] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:29:50.005 [2024-12-16 16:35:38.583134] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:29:50.005 [2024-12-16 16:35:38.583138] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:29:50.005 [2024-12-16 16:35:38.583142] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:29:50.005 [2024-12-16 16:35:38.583146] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:29:50.005 [2024-12-16 16:35:38.583150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:29:50.006 [2024-12-16 16:35:38.583366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.006 [2024-12-16 16:35:38.583383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.006 [2024-12-16 16:35:38.583402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.006 [2024-12-16 16:35:38.583418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:50.006 [2024-12-16 16:35:38.583422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.006 [2024-12-16 16:35:38.583608] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:29:50.006 [2024-12-16 16:35:38.583612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:29:50.006 [2024-12-16 16:35:38.583807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:29:50.006 [2024-12-16 16:35:38.583834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.006 [2024-12-16 16:35:38.583923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:50.006 [2024-12-16 16:35:38.583936] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=4096, cccid=4
00:29:50.006 [2024-12-16 16:35:38.583940] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ab40) on tqpair(0x19aeed0): expected_datao=0, payload_size=4096
00:29:50.269 [2024-12-16 16:35:38.625256] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:29:50.269 [2024-12-16 16:35:38.625269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.625278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.625294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.269 [2024-12-16 16:35:38.625397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:50.269 [2024-12-16 16:35:38.625410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=4096, cccid=4
00:29:50.269 [2024-12-16 16:35:38.625414] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ab40) on tqpair(0x19aeed0): expected_datao=0, payload_size=4096
00:29:50.269 [2024-12-16 16:35:38.666261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.666270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.666287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.269 [2024-12-16 16:35:38.666370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:50.269 [2024-12-16 16:35:38.666383] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=4096, cccid=4
00:29:50.269 [2024-12-16 16:35:38.666387] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ab40) on tqpair(0x19aeed0): expected_datao=0, payload_size=4096
00:29:50.269 [2024-12-16 16:35:38.707267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707305] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:29:50.269 [2024-12-16 16:35:38.707309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:29:50.269 [2024-12-16 16:35:38.707313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
[2024-12-16 16:35:38.707331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19aeed0) 00:29:50.269 [2024-12-16 16:35:38.707338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.269 [2024-12-16 16:35:38.707343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19aeed0) 00:29:50.269 [2024-12-16 16:35:38.707355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.269 [2024-12-16 16:35:38.707370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ab40, cid 4, qid 0 00:29:50.269 [2024-12-16 16:35:38.707375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1acc0, cid 5, qid 0 00:29:50.269 [2024-12-16 16:35:38.707492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.269 [2024-12-16 16:35:38.707498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.269 [2024-12-16 16:35:38.707501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ab40) on tqpair=0x19aeed0 00:29:50.269 [2024-12-16 16:35:38.707510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.269 [2024-12-16 16:35:38.707515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.269 [2024-12-16 16:35:38.707518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1acc0) on tqpair=0x19aeed0 00:29:50.269 [2024-12-16 16:35:38.707529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19aeed0) 00:29:50.269 [2024-12-16 16:35:38.707538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.269 [2024-12-16 16:35:38.707548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1acc0, cid 5, qid 0 00:29:50.269 [2024-12-16 16:35:38.707615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.269 [2024-12-16 16:35:38.707620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.269 [2024-12-16 16:35:38.707623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1acc0) on tqpair=0x19aeed0 00:29:50.269 [2024-12-16 16:35:38.707634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.269 [2024-12-16 16:35:38.707638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19aeed0) 00:29:50.269 [2024-12-16 16:35:38.707645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.269 [2024-12-16 16:35:38.707655] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1acc0, cid 5, qid 0 00:29:50.270 [2024-12-16 16:35:38.707742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.270 [2024-12-16 16:35:38.707747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.270 [2024-12-16 16:35:38.707750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1acc0) on tqpair=0x19aeed0 00:29:50.270 [2024-12-16 16:35:38.707761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19aeed0) 00:29:50.270 [2024-12-16 16:35:38.707770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.270 [2024-12-16 16:35:38.707779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1acc0, cid 5, qid 0 00:29:50.270 [2024-12-16 16:35:38.707860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.270 [2024-12-16 16:35:38.707866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.270 [2024-12-16 16:35:38.707869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1acc0) on tqpair=0x19aeed0 00:29:50.270 [2024-12-16 16:35:38.707885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19aeed0) 00:29:50.270 [2024-12-16 16:35:38.707895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.270 [2024-12-16 16:35:38.707901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19aeed0) 00:29:50.270 [2024-12-16 16:35:38.707910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.270 [2024-12-16 16:35:38.707915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19aeed0) 00:29:50.270 [2024-12-16 16:35:38.707924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.270 [2024-12-16 16:35:38.707930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.707933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19aeed0) 00:29:50.270 [2024-12-16 16:35:38.707938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.270 [2024-12-16 16:35:38.707949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1acc0, cid 5, qid 0 00:29:50.270 
[2024-12-16 16:35:38.707953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ab40, cid 4, qid 0 00:29:50.270 [2024-12-16 16:35:38.707957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ae40, cid 6, qid 0 00:29:50.270 [2024-12-16 16:35:38.707961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1afc0, cid 7, qid 0 00:29:50.270 [2024-12-16 16:35:38.708091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.270 [2024-12-16 16:35:38.712103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.270 [2024-12-16 16:35:38.712107] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=8192, cccid=5 00:29:50.270 [2024-12-16 16:35:38.712119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1acc0) on tqpair(0x19aeed0): expected_datao=0, payload_size=8192 00:29:50.270 [2024-12-16 16:35:38.712123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712136] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712140] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.270 [2024-12-16 16:35:38.712153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.270 [2024-12-16 16:35:38.712156] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712159] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=512, cccid=4 00:29:50.270 [2024-12-16 16:35:38.712163] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ab40) on tqpair(0x19aeed0): expected_datao=0, payload_size=512 00:29:50.270 [2024-12-16 16:35:38.712167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712173] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712176] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.270 [2024-12-16 16:35:38.712185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.270 [2024-12-16 16:35:38.712189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=512, cccid=6 00:29:50.270 [2024-12-16 16:35:38.712195] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ae40) on tqpair(0x19aeed0): expected_datao=0, payload_size=512 00:29:50.270 [2024-12-16 16:35:38.712199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.270 [2024-12-16 16:35:38.712217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.270 [2024-12-16 16:35:38.712220] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712223] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19aeed0): datao=0, datal=4096, cccid=7 00:29:50.270 [2024-12-16 16:35:38.712227] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1afc0) on tqpair(0x19aeed0): expected_datao=0, payload_size=4096 00:29:50.270 [2024-12-16 16:35:38.712231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712236] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712239] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.270 [2024-12-16 16:35:38.712249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.270 [2024-12-16 16:35:38.712252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1acc0) on tqpair=0x19aeed0 00:29:50.270 [2024-12-16 16:35:38.712265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.270 [2024-12-16 16:35:38.712271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.270 [2024-12-16 16:35:38.712274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ab40) on tqpair=0x19aeed0 00:29:50.270 [2024-12-16 16:35:38.712285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.270 [2024-12-16 16:35:38.712291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.270 [2024-12-16 16:35:38.712294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ae40) on tqpair=0x19aeed0 00:29:50.270 [2024-12-16 16:35:38.712303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.270 [2024-12-16 16:35:38.712308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.270 [2024-12-16 16:35:38.712311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.270 [2024-12-16 16:35:38.712315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1afc0) on tqpair=0x19aeed0 00:29:50.270 ===================================================== 00:29:50.270 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.270 ===================================================== 00:29:50.270 Controller Capabilities/Features 00:29:50.270 ================================ 00:29:50.270 Vendor ID: 8086 00:29:50.270 Subsystem Vendor ID: 8086 00:29:50.270 Serial Number: SPDK00000000000001 00:29:50.270 Model Number: SPDK bdev Controller 00:29:50.270 Firmware Version: 25.01 00:29:50.270 Recommended Arb Burst: 6 00:29:50.270 IEEE OUI Identifier: e4 d2 5c 00:29:50.270 Multi-path I/O 00:29:50.270 May have multiple subsystem ports: Yes 00:29:50.270 May have multiple controllers: Yes 00:29:50.270 Associated with SR-IOV VF: No 00:29:50.270 Max Data Transfer Size: 131072 00:29:50.270 Max Number of Namespaces: 32 00:29:50.270 Max Number of I/O Queues: 127 00:29:50.270 NVMe Specification Version (VS): 1.3 00:29:50.270 NVMe Specification Version (Identify): 1.3 
00:29:50.270 Maximum Queue Entries: 128 00:29:50.270 Contiguous Queues Required: Yes 00:29:50.270 Arbitration Mechanisms Supported 00:29:50.270 Weighted Round Robin: Not Supported 00:29:50.270 Vendor Specific: Not Supported 00:29:50.270 Reset Timeout: 15000 ms 00:29:50.270 Doorbell Stride: 4 bytes 00:29:50.270 NVM Subsystem Reset: Not Supported 00:29:50.270 Command Sets Supported 00:29:50.270 NVM Command Set: Supported 00:29:50.270 Boot Partition: Not Supported 00:29:50.270 Memory Page Size Minimum: 4096 bytes 00:29:50.270 Memory Page Size Maximum: 4096 bytes 00:29:50.270 Persistent Memory Region: Not Supported 00:29:50.270 Optional Asynchronous Events Supported 00:29:50.270 Namespace Attribute Notices: Supported 00:29:50.270 Firmware Activation Notices: Not Supported 00:29:50.270 ANA Change Notices: Not Supported 00:29:50.270 PLE Aggregate Log Change Notices: Not Supported 00:29:50.270 LBA Status Info Alert Notices: Not Supported 00:29:50.270 EGE Aggregate Log Change Notices: Not Supported 00:29:50.270 Normal NVM Subsystem Shutdown event: Not Supported 00:29:50.270 Zone Descriptor Change Notices: Not Supported 00:29:50.270 Discovery Log Change Notices: Not Supported 00:29:50.270 Controller Attributes 00:29:50.270 128-bit Host Identifier: Supported 00:29:50.270 Non-Operational Permissive Mode: Not Supported 00:29:50.270 NVM Sets: Not Supported 00:29:50.270 Read Recovery Levels: Not Supported 00:29:50.270 Endurance Groups: Not Supported 00:29:50.270 Predictable Latency Mode: Not Supported 00:29:50.270 Traffic Based Keep Alive: Not Supported 00:29:50.271 Namespace Granularity: Not Supported 00:29:50.271 SQ Associations: Not Supported 00:29:50.271 UUID List: Not Supported 00:29:50.271 Multi-Domain Subsystem: Not Supported 00:29:50.271 Fixed Capacity Management: Not Supported 00:29:50.271 Variable Capacity Management: Not Supported 00:29:50.271 Delete Endurance Group: Not Supported 00:29:50.271 Delete NVM Set: Not Supported 00:29:50.271 Extended LBA Formats Supported: Not Supported 00:29:50.271 Flexible Data Placement Supported: Not Supported 00:29:50.271 00:29:50.271 Controller Memory Buffer Support 00:29:50.271 ================================ 00:29:50.271 Supported: No 00:29:50.271 00:29:50.271 Persistent Memory Region Support 00:29:50.271 ================================ 00:29:50.271 Supported: No 00:29:50.271 00:29:50.271 Admin Command Set Attributes 00:29:50.271 ============================ 00:29:50.271 Security Send/Receive: Not Supported 00:29:50.271 Format NVM: Not Supported 00:29:50.271 Firmware Activate/Download: Not Supported 00:29:50.271 Namespace Management: Not Supported 00:29:50.271 Device Self-Test: Not Supported 00:29:50.271 Directives: Not Supported 00:29:50.271 NVMe-MI: Not Supported 00:29:50.271 Virtualization Management: Not Supported 00:29:50.271 Doorbell Buffer Config: Not Supported 00:29:50.271 Get LBA Status Capability: Not Supported 00:29:50.271 Command & Feature Lockdown Capability: Not Supported 00:29:50.271 Abort Command Limit: 4 00:29:50.271 Async Event Request Limit: 4 00:29:50.271 Number of Firmware Slots: N/A 00:29:50.271 Firmware Slot 1 Read-Only: N/A 00:29:50.271 Firmware Activation Without Reset: N/A 00:29:50.271 Multiple Update Detection Support: N/A 00:29:50.271 Firmware Update Granularity: No Information Provided 00:29:50.271 Per-Namespace SMART Log: No 00:29:50.271 Asymmetric Namespace Access Log Page: Not Supported 00:29:50.271 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:50.271 Command Effects Log Page: Supported 00:29:50.271 Get Log Page Extended 
Data: Supported 00:29:50.271 Telemetry Log Pages: Not Supported 00:29:50.271 Persistent Event Log Pages: Not Supported 00:29:50.271 Supported Log Pages Log Page: May Support 00:29:50.271 Commands Supported & Effects Log Page: Not Supported 00:29:50.271 Feature Identifiers & Effects Log Page: May Support 00:29:50.271 NVMe-MI Commands & Effects Log Page: May Support 00:29:50.271 Data Area 4 for Telemetry Log: Not Supported 00:29:50.271 Error Log Page Entries Supported: 128 00:29:50.271 Keep Alive: Supported 00:29:50.271 Keep Alive Granularity: 10000 ms 00:29:50.271 00:29:50.271 NVM Command Set Attributes 00:29:50.271 ========================== 00:29:50.271 Submission Queue Entry Size 00:29:50.271 Max: 64 00:29:50.271 Min: 64 00:29:50.271 Completion Queue Entry Size 00:29:50.271 Max: 16 00:29:50.271 Min: 16 00:29:50.271 Number of Namespaces: 32 00:29:50.271 Compare Command: Supported 00:29:50.271 Write Uncorrectable Command: Not Supported 00:29:50.271 Dataset Management Command: Supported 00:29:50.271 Write Zeroes Command: Supported 00:29:50.271 Set Features Save Field: Not Supported 00:29:50.271 Reservations: Supported 00:29:50.271 Timestamp: Not Supported 00:29:50.271 Copy: Supported 00:29:50.271 Volatile Write Cache: Present 00:29:50.271 Atomic Write Unit (Normal): 1 00:29:50.271 Atomic Write Unit (PFail): 1 00:29:50.271 Atomic Compare & Write Unit: 1 00:29:50.271 Fused Compare & Write: Supported 00:29:50.271 Scatter-Gather List 00:29:50.271 SGL Command Set: Supported 00:29:50.271 SGL Keyed: Supported 00:29:50.271 SGL Bit Bucket Descriptor: Not Supported 00:29:50.271 SGL Metadata Pointer: Not Supported 00:29:50.271 Oversized SGL: Not Supported 00:29:50.271 SGL Metadata Address: Not Supported 00:29:50.271 SGL Offset: Supported 00:29:50.271 Transport SGL Data Block: Not Supported 00:29:50.271 Replay Protected Memory Block: Not Supported 00:29:50.271 00:29:50.271 Firmware Slot Information 00:29:50.271 ========================= 00:29:50.271 Active slot: 1 00:29:50.271 Slot 1 Firmware Revision: 25.01 00:29:50.271 00:29:50.271 00:29:50.271 Commands Supported and Effects 00:29:50.271 ============================== 00:29:50.271 Admin Commands 00:29:50.271 -------------- 00:29:50.271 Get Log Page (02h): Supported 00:29:50.271 Identify (06h): Supported 00:29:50.271 Abort (08h): Supported 00:29:50.271 Set Features (09h): Supported 00:29:50.271 Get Features (0Ah): Supported 00:29:50.271 Asynchronous Event Request (0Ch): Supported 00:29:50.271 Keep Alive (18h): Supported 00:29:50.271 I/O Commands 00:29:50.271 ------------ 00:29:50.271 Flush (00h): Supported LBA-Change 00:29:50.271 Write (01h): Supported LBA-Change 00:29:50.271 Read (02h): Supported 00:29:50.271 Compare (05h): Supported 00:29:50.271 Write Zeroes (08h): Supported LBA-Change 00:29:50.271 Dataset Management (09h): Supported LBA-Change 00:29:50.271 Copy (19h): Supported LBA-Change 00:29:50.271 00:29:50.271 Error Log 00:29:50.271 ========= 00:29:50.271 00:29:50.271 Arbitration 00:29:50.271 =========== 00:29:50.271 Arbitration Burst: 1 00:29:50.271 00:29:50.271 Power Management 00:29:50.271 ================ 00:29:50.271 Number of Power States: 1 00:29:50.271 Current Power State: Power State #0 00:29:50.271 Power State #0: 00:29:50.271 Max Power: 0.00 W 00:29:50.271 Non-Operational State: Operational 00:29:50.271 Entry Latency: Not Reported 00:29:50.271 Exit Latency: Not Reported 00:29:50.271 Relative Read Throughput: 0 00:29:50.271 Relative Read Latency: 0 00:29:50.271 Relative Write Throughput: 0 00:29:50.271 Relative Write Latency: 0 
00:29:50.271 Idle Power: Not Reported 00:29:50.271 Active Power: Not Reported 00:29:50.271 Non-Operational Permissive Mode: Not Supported 00:29:50.271 00:29:50.271 Health Information 00:29:50.271 ================== 00:29:50.271 Critical Warnings: 00:29:50.271 Available Spare Space: OK 00:29:50.271 Temperature: OK 00:29:50.271 Device Reliability: OK 00:29:50.271 Read Only: No 00:29:50.271 Volatile Memory Backup: OK 00:29:50.271 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:50.271 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:50.271 Available Spare: 0% 00:29:50.271 Available Spare Threshold: 0% 00:29:50.271 Life Percentage Used:[2024-12-16 16:35:38.712395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19aeed0) 00:29:50.271 [2024-12-16 16:35:38.712406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.271 [2024-12-16 16:35:38.712419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1afc0, cid 7, qid 0 00:29:50.271 [2024-12-16 16:35:38.712537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.271 [2024-12-16 16:35:38.712543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.271 [2024-12-16 16:35:38.712546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1afc0) on tqpair=0x19aeed0 00:29:50.271 [2024-12-16 16:35:38.712578] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:50.271 [2024-12-16 16:35:38.712587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a540) on tqpair=0x19aeed0 00:29:50.271 [2024-12-16 16:35:38.712592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.271 [2024-12-16 16:35:38.712597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a6c0) on tqpair=0x19aeed0 00:29:50.271 [2024-12-16 16:35:38.712601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.271 [2024-12-16 16:35:38.712605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a840) on tqpair=0x19aeed0 00:29:50.271 [2024-12-16 16:35:38.712609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.271 [2024-12-16 16:35:38.712613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.271 [2024-12-16 16:35:38.712617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.271 [2024-12-16 16:35:38.712624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.271 [2024-12-16 16:35:38.712636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:50.271 [2024-12-16 16:35:38.712648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.271 [2024-12-16 16:35:38.712736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.271 [2024-12-16 16:35:38.712742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.271 [2024-12-16 16:35:38.712745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.271 [2024-12-16 16:35:38.712753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.271 [2024-12-16 16:35:38.712763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.271 [2024-12-16 16:35:38.712769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.271 [2024-12-16 16:35:38.712781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.271 [2024-12-16 16:35:38.712853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.712858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.712861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.712864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.712868] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:50.272 [2024-12-16 16:35:38.712872] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:50.272 [2024-12-16 16:35:38.712880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.712883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.712887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.712892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.712901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.712987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.712993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.712996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.712999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713019] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713386] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 
16:35:38.713807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.713945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.713951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.713954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.713965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.713972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.713977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.713986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.714045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.714051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.714054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.714057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.714065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.714068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.714072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.272 [2024-12-16 16:35:38.714077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.272 [2024-12-16 16:35:38.714086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.272 [2024-12-16 16:35:38.714198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.272 [2024-12-16 16:35:38.714204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.272 [2024-12-16 16:35:38.714207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.272 
[2024-12-16 16:35:38.714210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.272 [2024-12-16 16:35:38.714218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.714222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.272 [2024-12-16 16:35:38.714227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.714302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.714307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.714310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.714321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.714449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.714455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.714458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.714469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.714601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.714607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.714610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.714621] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.714751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.714756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.714759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.714771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.714850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.714856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.714859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.714871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.714953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.714959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.714962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.714973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.714980] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.714985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.714995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.715054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.715059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.715062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.715074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.715086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.715099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.715206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.715212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.715215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.715226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.715238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.715249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.715316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.715322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.715325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.715337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.715349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.715358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.715457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.715463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.715466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.273 [2024-12-16 16:35:38.715477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.273 [2024-12-16 16:35:38.715489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.273 [2024-12-16 16:35:38.715498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.273 [2024-12-16 16:35:38.715558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.273 [2024-12-16 16:35:38.715563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.273 [2024-12-16 16:35:38.715566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.273 [2024-12-16 16:35:38.715570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.715577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.274 [2024-12-16 16:35:38.715590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.274 [2024-12-16 16:35:38.715599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.274 [2024-12-16 16:35:38.715658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.274 [2024-12-16 16:35:38.715664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.274 [2024-12-16 16:35:38.715667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.715678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.274 [2024-12-16 16:35:38.715690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.274 [2024-12-16 16:35:38.715699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.274 [2024-12-16 
16:35:38.715758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.274 [2024-12-16 16:35:38.715764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.274 [2024-12-16 16:35:38.715767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.715778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.274 [2024-12-16 16:35:38.715790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.274 [2024-12-16 16:35:38.715799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.274 [2024-12-16 16:35:38.715911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.274 [2024-12-16 16:35:38.715916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.274 [2024-12-16 16:35:38.715919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.715931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.715937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.274 [2024-12-16 16:35:38.715943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.274 [2024-12-16 16:35:38.715952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.274 [2024-12-16 16:35:38.716012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.274 [2024-12-16 16:35:38.716017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.274 [2024-12-16 16:35:38.716020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.716024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.716032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.716035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.716038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.274 [2024-12-16 16:35:38.716044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.274 [2024-12-16 16:35:38.716053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.274 [2024-12-16 16:35:38.720103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.274 [2024-12-16 16:35:38.720111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.274 
[2024-12-16 16:35:38.720114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.720117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.720127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.720130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.720133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19aeed0) 00:29:50.274 [2024-12-16 16:35:38.720139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.274 [2024-12-16 16:35:38.720150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a9c0, cid 3, qid 0 00:29:50.274 [2024-12-16 16:35:38.720235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.274 [2024-12-16 16:35:38.720241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.274 [2024-12-16 16:35:38.720244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.274 [2024-12-16 16:35:38.720247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a9c0) on tqpair=0x19aeed0 00:29:50.274 [2024-12-16 16:35:38.720254] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:29:50.274 0% 00:29:50.274 Data Units Read: 0 00:29:50.274 Data Units Written: 0 00:29:50.274 Host Read Commands: 0 00:29:50.274 Host Write Commands: 0 00:29:50.274 Controller Busy Time: 0 minutes 00:29:50.274 Power Cycles: 0 00:29:50.274 Power On Hours: 0 hours 00:29:50.274 Unsafe Shutdowns: 0 00:29:50.274 Unrecoverable Media Errors: 0 00:29:50.274 Lifetime Error Log Entries: 0 00:29:50.274 Warning Temperature Time: 0 minutes 00:29:50.274 Critical Temperature Time: 0 minutes 00:29:50.274 00:29:50.274 Number of Queues 00:29:50.274 ================ 00:29:50.274 Number of I/O Submission Queues: 127 00:29:50.274 Number of I/O Completion Queues: 127 00:29:50.274 00:29:50.274 Active Namespaces 00:29:50.274 ================= 00:29:50.274 Namespace ID:1 00:29:50.274 Error Recovery Timeout: Unlimited 00:29:50.274 Command Set Identifier: NVM (00h) 00:29:50.274 Deallocate: Supported 00:29:50.274 Deallocated/Unwritten Error: Not Supported 00:29:50.274 Deallocated Read Value: Unknown 00:29:50.274 Deallocate in Write Zeroes: Not Supported 00:29:50.274 Deallocated Guard Field: 0xFFFF 00:29:50.274 Flush: Supported 00:29:50.274 Reservation: Supported 00:29:50.274 Namespace Sharing Capabilities: Multiple Controllers 00:29:50.274 Size (in LBAs): 131072 (0GiB) 00:29:50.274 Capacity (in LBAs): 131072 (0GiB) 00:29:50.274 Utilization (in LBAs): 131072 (0GiB) 00:29:50.274 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:50.274 EUI64: ABCDEF0123456789 00:29:50.274 UUID: fbdf6485-e306-4403-870f-8291a578ef26 00:29:50.274 Thin Provisioning: Not Supported 00:29:50.274 Per-NS Atomic Units: Yes 00:29:50.274 Atomic Boundary Size (Normal): 0 00:29:50.274 Atomic Boundary Size (PFail): 0 00:29:50.274 Atomic Boundary Offset: 0 00:29:50.274 Maximum Single Source Range Length: 65535 00:29:50.274 Maximum Copy Length: 65535 00:29:50.274 Maximum Source Range Count: 1 00:29:50.274 NGUID/EUI64 Never Reused: No 00:29:50.274 Namespace Write Protected: No 00:29:50.274 Number of LBA Formats: 1 00:29:50.274 Current LBA Format: LBA Format #00 
00:29:50.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:50.274 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.274 rmmod nvme_tcp 00:29:50.274 rmmod nvme_fabrics 00:29:50.274 rmmod nvme_keyring 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1116305 ']' 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1116305 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1116305 ']' 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1116305 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116305 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116305' 00:29:50.274 killing process with pid 1116305 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1116305 00:29:50.274 16:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1116305 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 
00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.534 16:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.069 00:29:53.069 real 0m9.344s 00:29:53.069 user 0m5.984s 00:29:53.069 sys 0m4.777s 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.069 ************************************ 00:29:53.069 END TEST nvmf_identify 00:29:53.069 ************************************ 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.069 ************************************ 00:29:53.069 START TEST nvmf_perf 00:29:53.069 ************************************ 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:53.069 * Looking for test storage... 
00:29:53.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:53.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.069 --rc genhtml_branch_coverage=1 00:29:53.069 --rc genhtml_function_coverage=1 00:29:53.069 --rc genhtml_legend=1 00:29:53.069 --rc geninfo_all_blocks=1 00:29:53.069 --rc geninfo_unexecuted_blocks=1 00:29:53.069 00:29:53.069 ' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:53.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.069 --rc genhtml_branch_coverage=1 00:29:53.069 --rc genhtml_function_coverage=1 00:29:53.069 --rc genhtml_legend=1 00:29:53.069 --rc geninfo_all_blocks=1 00:29:53.069 --rc geninfo_unexecuted_blocks=1 00:29:53.069 00:29:53.069 ' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:53.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.069 --rc genhtml_branch_coverage=1 00:29:53.069 --rc genhtml_function_coverage=1 00:29:53.069 --rc genhtml_legend=1 00:29:53.069 --rc geninfo_all_blocks=1 00:29:53.069 --rc geninfo_unexecuted_blocks=1 00:29:53.069 00:29:53.069 ' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:53.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.069 --rc genhtml_branch_coverage=1 00:29:53.069 --rc genhtml_function_coverage=1 00:29:53.069 --rc genhtml_legend=1 00:29:53.069 --rc geninfo_all_blocks=1 00:29:53.069 --rc geninfo_unexecuted_blocks=1 00:29:53.069 00:29:53.069 ' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:53.069 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:53.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.070 16:35:41 
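The gather_supported_nvmf_pci_devs pass traced next (the "Found 0000:af:00.x" and "Found net devices under ..." lines) boils down to a PCI scan like the following minimal bash sketch. This is not the verbatim nvmf/common.sh code: the device-ID table is abbreviated to the Intel E810 0x159b ID actually matched on this rig, and the cvl_* names are simply what this machine's ice driver exposes.

  #!/usr/bin/env bash
  # Scan PCI devices for supported NICs and collect their kernel net interface names.
  intel=0x8086
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")
      device=$(<"$pci/device")
      # Abbreviated ID check: the real script matches many e810/x722/mlx IDs.
      [[ $vendor == "$intel" && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for dev in "$pci"/net/*; do
          [[ -e $dev ]] && net_devs+=("${dev##*/}")   # e.g. cvl_0_0, cvl_0_1
      done
  done
  ((${#net_devs[@]})) && printf 'Found net device: %s\n' "${net_devs[@]}"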
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.070 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.637 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.638 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.638 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.638 16:35:46 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.638 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.638 16:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.638 16:35:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:29:59.638 00:29:59.638 --- 10.0.0.2 ping statistics --- 00:29:59.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.638 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:29:59.638 00:29:59.638 --- 10.0.0.1 ping statistics --- 00:29:59.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.638 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1119938 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1119938 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1119938 ']' 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:59.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:59.638 [2024-12-16 16:35:47.330352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:59.638 [2024-12-16 16:35:47.330404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.638 [2024-12-16 16:35:47.411884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.638 [2024-12-16 16:35:47.435124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.638 [2024-12-16 16:35:47.435161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.638 [2024-12-16 16:35:47.435168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.638 [2024-12-16 16:35:47.435174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.638 [2024-12-16 16:35:47.435179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.638 [2024-12-16 16:35:47.436628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.638 [2024-12-16 16:35:47.436739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.638 [2024-12-16 16:35:47.436851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.638 [2024-12-16 16:35:47.436851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:59.638 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:02.167 16:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:02.167 16:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:02.425 16:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:02.425 16:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:02.425 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
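The target-side plumbing traced below is a short, fixed sequence of rpc.py calls. Condensed into a sketch (run from the spdk checkout rather than the absolute workspace path the log uses; transport, NQN, serial, address, and port exactly as in this run):

  #!/usr/bin/env bash
  rpc=./scripts/rpc.py   # the log invokes this same script by its absolute path
  # Create the TCP transport, one subsystem backed by both bdevs, and a listener on 10.0.0.2:4420.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420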
00:30:02.425 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:02.425 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:02.425 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:02.425 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:02.683 [2024-12-16 16:35:51.203343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.683 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.942 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:02.942 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.200 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:03.200 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:03.458 16:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.458 [2024-12-16 16:35:52.035774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.716 16:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.716 16:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:03.716 16:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:03.716 16:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:03.716 16:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:05.092 Initializing NVMe Controllers 00:30:05.092 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:05.092 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:05.092 Initialization complete. Launching workers. 
00:30:05.092 ======================================================== 00:30:05.092 Latency(us) 00:30:05.092 Device Information : IOPS MiB/s Average min max 00:30:05.093 PCIE (0000:5e:00.0) NSID 1 from core 0: 99883.56 390.17 319.77 38.89 4244.16 00:30:05.093 ======================================================== 00:30:05.093 Total : 99883.56 390.17 319.77 38.89 4244.16 00:30:05.093 00:30:05.093 16:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.469 Initializing NVMe Controllers 00:30:06.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:06.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:06.469 Initialization complete. Launching workers. 00:30:06.469 ======================================================== 00:30:06.469 Latency(us) 00:30:06.469 Device Information : IOPS MiB/s Average min max 00:30:06.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.00 0.30 13204.05 116.16 45761.08 00:30:06.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 42.00 0.16 24302.41 7214.75 55875.05 00:30:06.469 ======================================================== 00:30:06.469 Total : 120.00 0.47 17088.47 116.16 55875.05 00:30:06.469 00:30:06.469 16:35:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.404 Initializing NVMe Controllers 00:30:07.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:07.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:07.404 Initialization complete. Launching workers. 00:30:07.404 ======================================================== 00:30:07.404 Latency(us) 00:30:07.404 Device Information : IOPS MiB/s Average min max 00:30:07.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11251.99 43.95 2844.30 463.92 6735.13 00:30:07.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3957.00 15.46 8120.22 6393.67 15665.61 00:30:07.404 ======================================================== 00:30:07.404 Total : 15208.98 59.41 4216.96 463.92 15665.61 00:30:07.404 00:30:07.404 16:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:07.404 16:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:07.404 16:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.692 Initializing NVMe Controllers 00:30:10.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.692 Controller IO queue size 128, less than required. 00:30:10.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:10.692 Controller IO queue size 128, less than required. 00:30:10.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.692 Initialization complete. Launching workers. 00:30:10.692 ======================================================== 00:30:10.692 Latency(us) 00:30:10.692 Device Information : IOPS MiB/s Average min max 00:30:10.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1851.99 463.00 70386.73 49267.62 108547.24 00:30:10.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.00 148.50 220010.50 79824.61 347057.40 00:30:10.692 ======================================================== 00:30:10.692 Total : 2445.99 611.50 106722.19 49267.62 347057.40 00:30:10.692 00:30:10.692 16:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:10.692 No valid NVMe controllers or AIO or URING devices found 00:30:10.692 Initializing NVMe Controllers 00:30:10.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.692 Controller IO queue size 128, less than required. 00:30:10.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.692 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:10.692 Controller IO queue size 128, less than required. 00:30:10.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.692 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:10.692 WARNING: Some requested NVMe devices were skipped 00:30:10.693 16:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:12.607 Initializing NVMe Controllers 00:30:12.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.607 Controller IO queue size 128, less than required. 00:30:12.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.607 Controller IO queue size 128, less than required. 00:30:12.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:12.607 Initialization complete. Launching workers. 
00:30:12.607 00:30:12.607 ==================== 00:30:12.607 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:12.607 TCP transport: 00:30:12.607 polls: 11450 00:30:12.607 idle_polls: 8072 00:30:12.607 sock_completions: 3378 00:30:12.607 nvme_completions: 6313 00:30:12.607 submitted_requests: 9564 00:30:12.607 queued_requests: 1 00:30:12.607 00:30:12.607 ==================== 00:30:12.607 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:12.607 TCP transport: 00:30:12.607 polls: 11724 00:30:12.607 idle_polls: 7733 00:30:12.607 sock_completions: 3991 00:30:12.607 nvme_completions: 6765 00:30:12.607 submitted_requests: 10092 00:30:12.607 queued_requests: 1 00:30:12.607 ======================================================== 00:30:12.607 Latency(us) 00:30:12.607 Device Information : IOPS MiB/s Average min max 00:30:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1576.99 394.25 82721.56 47555.06 139954.95 00:30:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1689.92 422.48 76507.73 42223.02 119735.35 00:30:12.607 ======================================================== 00:30:12.607 Total : 3266.92 816.73 79507.25 42223.02 139954.95 00:30:12.607 00:30:12.869 16:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:12.869 16:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.869 16:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:12.869 16:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:12.869 16:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0f5248f1-652f-4448-b026-ae0c5d756721 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0f5248f1-652f-4448-b026-ae0c5d756721 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0f5248f1-652f-4448-b026-ae0c5d756721 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:16.154 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:16.412 { 00:30:16.412 "uuid": "0f5248f1-652f-4448-b026-ae0c5d756721", 00:30:16.412 "name": "lvs_0", 00:30:16.412 "base_bdev": "Nvme0n1", 00:30:16.412 "total_data_clusters": 238234, 00:30:16.412 "free_clusters": 238234, 00:30:16.412 "block_size": 512, 00:30:16.412 "cluster_size": 4194304 00:30:16.412 } 00:30:16.412 ]' 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0f5248f1-652f-4448-b026-ae0c5d756721") .free_clusters' 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:16.412 16:36:04 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0f5248f1-652f-4448-b026-ae0c5d756721") .cluster_size' 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:16.412 952936 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:16.412 16:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f5248f1-652f-4448-b026-ae0c5d756721 lbd_0 20480 00:30:16.978 16:36:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ac2ba396-6ca2-436d-b85b-d2ec46295211 00:30:16.978 16:36:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ac2ba396-6ca2-436d-b85b-d2ec46295211 lvs_n_0 00:30:17.542 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=e7940fca-72a5-449a-a54b-eeb1c335ca8c 00:30:17.542 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb e7940fca-72a5-449a-a54b-eeb1c335ca8c 00:30:17.542 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=e7940fca-72a5-449a-a54b-eeb1c335ca8c 00:30:17.543 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:17.543 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:17.543 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:17.543 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:17.801 { 00:30:17.801 "uuid": "0f5248f1-652f-4448-b026-ae0c5d756721", 00:30:17.801 "name": "lvs_0", 00:30:17.801 "base_bdev": "Nvme0n1", 00:30:17.801 "total_data_clusters": 238234, 00:30:17.801 "free_clusters": 233114, 00:30:17.801 "block_size": 512, 00:30:17.801 "cluster_size": 4194304 00:30:17.801 }, 00:30:17.801 { 00:30:17.801 "uuid": "e7940fca-72a5-449a-a54b-eeb1c335ca8c", 00:30:17.801 "name": "lvs_n_0", 00:30:17.801 "base_bdev": "ac2ba396-6ca2-436d-b85b-d2ec46295211", 00:30:17.801 "total_data_clusters": 5114, 00:30:17.801 "free_clusters": 5114, 00:30:17.801 "block_size": 512, 00:30:17.801 "cluster_size": 4194304 00:30:17.801 } 00:30:17.801 ]' 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="e7940fca-72a5-449a-a54b-eeb1c335ca8c") .free_clusters' 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="e7940fca-72a5-449a-a54b-eeb1c335ca8c") .cluster_size' 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:17.801 20456 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:17.801 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7940fca-72a5-449a-a54b-eeb1c335ca8c lbd_nest_0 20456 00:30:18.059 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=bb6f046b-7e39-4146-88a0-fa6733d8044e 00:30:18.059 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.318 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:18.318 16:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bb6f046b-7e39-4146-88a0-fa6733d8044e 00:30:18.576 16:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.834 16:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:18.834 16:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:18.834 16:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:18.834 16:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:18.834 16:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.032 Initializing NVMe Controllers 00:30:31.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.032 Initialization complete. Launching workers. 00:30:31.032 ======================================================== 00:30:31.032 Latency(us) 00:30:31.032 Device Information : IOPS MiB/s Average min max 00:30:31.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.10 0.02 23275.97 124.29 45886.99 00:30:31.032 ======================================================== 00:30:31.032 Total : 43.10 0.02 23275.97 124.29 45886.99 00:30:31.032 00:30:31.032 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:31.032 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.115 Initializing NVMe Controllers 00:30:41.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.115 Initialization complete. Launching workers. 
00:30:41.115 ======================================================== 00:30:41.115 Latency(us) 00:30:41.115 Device Information : IOPS MiB/s Average min max 00:30:41.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.09 9.01 13882.11 4037.29 49877.02 00:30:41.115 ======================================================== 00:30:41.115 Total : 72.09 9.01 13882.11 4037.29 49877.02 00:30:41.115 00:30:41.115 16:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:41.115 16:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:41.115 16:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.092 Initializing NVMe Controllers 00:30:51.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.092 Initialization complete. Launching workers. 00:30:51.092 ======================================================== 00:30:51.092 Latency(us) 00:30:51.092 Device Information : IOPS MiB/s Average min max 00:30:51.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8578.98 4.19 3729.66 238.42 10055.43 00:30:51.092 ======================================================== 00:30:51.092 Total : 8578.98 4.19 3729.66 238.42 10055.43 00:30:51.092 00:30:51.092 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:51.092 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.069 Initializing NVMe Controllers 00:31:01.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.069 Initialization complete. Launching workers. 00:31:01.069 ======================================================== 00:31:01.069 Latency(us) 00:31:01.069 Device Information : IOPS MiB/s Average min max 00:31:01.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4406.57 550.82 7262.67 647.91 18225.67 00:31:01.069 ======================================================== 00:31:01.069 Total : 4406.57 550.82 7262.67 647.91 18225.67 00:31:01.069 00:31:01.069 16:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:01.069 16:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:01.070 16:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.046 Initializing NVMe Controllers 00:31:11.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:11.046 Controller IO queue size 128, less than required. 00:31:11.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
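The two notices just above come from spdk_nvme_perf itself: an NVMe submission queue of size N can hold at most N-1 outstanding commands, so the target's 128-entry I/O queues cannot quite sustain the requested -q 128 and the surplus submissions wait in the host driver. The throughput figures remain valid; the queueing simply happens one layer higher than usual. The six runs in this block are driven by the small sweep traced at host/perf.sh lines 95-99; a minimal sketch of that sweep, using the paths and transport ID exactly as they appear in this job:

    # Sweep queue depth x IO size against the TCP target: 10 s of 50/50 randrw per point.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
      done
    done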
00:31:11.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:11.046 Initialization complete. Launching workers. 00:31:11.046 ======================================================== 00:31:11.046 Latency(us) 00:31:11.046 Device Information : IOPS MiB/s Average min max 00:31:11.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15863.28 7.75 8071.50 1373.48 22597.68 00:31:11.046 ======================================================== 00:31:11.046 Total : 15863.28 7.75 8071.50 1373.48 22597.68 00:31:11.046 00:31:11.046 16:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:11.046 16:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:23.249 Initializing NVMe Controllers 00:31:23.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:23.249 Controller IO queue size 128, less than required. 00:31:23.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:23.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:23.249 Initialization complete. Launching workers. 00:31:23.249 ======================================================== 00:31:23.249 Latency(us) 00:31:23.249 Device Information : IOPS MiB/s Average min max 00:31:23.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1239.00 154.87 104219.77 31429.62 227625.23 00:31:23.249 ======================================================== 00:31:23.249 Total : 1239.00 154.87 104219.77 31429.62 227625.23 00:31:23.249 00:31:23.249 16:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.249 16:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bb6f046b-7e39-4146-88a0-fa6733d8044e 00:31:23.249 16:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:23.249 16:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac2ba396-6ca2-436d-b85b-d2ec46295211 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:23.249 rmmod nvme_tcp 
00:31:23.249 rmmod nvme_fabrics 00:31:23.249 rmmod nvme_keyring 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1119938 ']' 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1119938 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1119938 ']' 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1119938 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1119938 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119938' 00:31:23.249 killing process with pid 1119938 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1119938 00:31:23.249 16:37:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1119938 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.625 16:37:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.528 00:31:26.528 real 1m33.698s 00:31:26.528 user 5m34.261s 00:31:26.528 sys 0m17.268s 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:26.528 ************************************ 00:31:26.528 END TEST nvmf_perf 00:31:26.528 ************************************ 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.528 ************************************ 00:31:26.528 START TEST nvmf_fio_host 00:31:26.528 ************************************ 00:31:26.528 16:37:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:26.528 * Looking for test storage... 00:31:26.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:26.528 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:26.528 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:26.528 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:26.787 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:26.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.788 --rc genhtml_branch_coverage=1 00:31:26.788 --rc genhtml_function_coverage=1 00:31:26.788 --rc genhtml_legend=1 00:31:26.788 --rc geninfo_all_blocks=1 00:31:26.788 --rc geninfo_unexecuted_blocks=1 00:31:26.788 00:31:26.788 ' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:26.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.788 --rc genhtml_branch_coverage=1 00:31:26.788 --rc genhtml_function_coverage=1 00:31:26.788 --rc genhtml_legend=1 00:31:26.788 --rc geninfo_all_blocks=1 00:31:26.788 --rc geninfo_unexecuted_blocks=1 00:31:26.788 00:31:26.788 ' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:26.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.788 --rc genhtml_branch_coverage=1 00:31:26.788 --rc genhtml_function_coverage=1 00:31:26.788 --rc genhtml_legend=1 00:31:26.788 --rc geninfo_all_blocks=1 00:31:26.788 --rc geninfo_unexecuted_blocks=1 00:31:26.788 00:31:26.788 ' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:26.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.788 --rc genhtml_branch_coverage=1 00:31:26.788 --rc genhtml_function_coverage=1 00:31:26.788 --rc genhtml_legend=1 00:31:26.788 --rc geninfo_all_blocks=1 00:31:26.788 --rc geninfo_unexecuted_blocks=1 00:31:26.788 00:31:26.788 ' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.788 16:37:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.788 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:26.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:26.789 
16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.789 16:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:33.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:33.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:33.357 Found net devices under 0000:af:00.0: cvl_0_0 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:33.357 Found net devices under 0000:af:00.1: cvl_0_1 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.357 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:33.358 16:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:33.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:33.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms
00:31:33.358
00:31:33.358 --- 10.0.0.2 ping statistics ---
00:31:33.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:33.358 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:33.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:33.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:31:33.358
00:31:33.358 --- 10.0.0.1 ping statistics ---
00:31:33.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:33.358 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
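The block above is nvmftestinit turning the two E810 ports into a point-to-point test network: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables ACCEPT rule tagged SPDK_NVMF opens TCP port 4420 so the cleanup pass can later find and strip exactly this rule. The two pings then prove reachability in both directions before any NVMe-oF traffic is attempted; reduced to its essentials, the check is just:

    # Verify the namespace plumbing from both sides (names as used in this run).
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns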
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1137358
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1137358
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1137358 ']'
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:33.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:31:33.358 [2024-12-16 16:37:21.126818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:31:33.358 [2024-12-16 16:37:21.126866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
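With networking verified, fio.sh starts the SPDK target inside the namespace: -m 0xF pins it to four cores (hence the four reactor threads reported below), -e 0xFFFF enables every tracepoint group, and -i 0 fixes the shared-memory instance ID. waitforlisten then blocks until the new process answers on the /var/tmp/spdk.sock RPC socket before any RPCs are issued. A rough sketch of that launch-and-wait pattern; the polling loop is an illustrative stand-in, the real waitforlisten helper lives in autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target responds (simplified waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done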
00:31:33.358 [2024-12-16 16:37:21.126866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.358 [2024-12-16 16:37:21.203631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.358 [2024-12-16 16:37:21.226974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.358 [2024-12-16 16:37:21.227012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.358 [2024-12-16 16:37:21.227019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.358 [2024-12-16 16:37:21.227025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.358 [2024-12-16 16:37:21.227030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.358 [2024-12-16 16:37:21.228343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.358 [2024-12-16 16:37:21.228450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.358 [2024-12-16 16:37:21.228560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.358 [2024-12-16 16:37:21.228561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:33.358 [2024-12-16 16:37:21.484720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:33.358 Malloc1 00:31:33.358 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.617 16:37:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.617 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.875 [2024-12-16 16:37:22.328279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.875 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:31:34.134 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:31:34.135 16:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:31:34.393 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:31:34.393 fio-3.35
00:31:34.393 Starting 1 thread
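The fio_plugin trace above shows how the harness runs stock fio against the NVMe-oF target without any kernel block device: it resolves the SPDK external ioengine at build/fio/spdk_nvme, probes it with ldd for sanitizer runtimes that would also need preloading (none found here), and then LD_PRELOADs it into fio, passing the transport ID through --filename in place of a device path. Stripped of the sanitizer probing, the run that just started reduces to:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096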
00:31:36.929
00:31:36.929 test: (groupid=0, jobs=1): err= 0: pid=1137782: Mon Dec 16 16:37:25 2024
00:31:36.929 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.9MiB/2005msec)
00:31:36.929 slat (nsec): min=1509, max=240614, avg=1693.89, stdev=2226.72
00:31:36.929 clat (usec): min=2708, max=10356, avg=5945.53, stdev=449.28
00:31:36.929 lat (usec): min=2745, max=10358, avg=5947.22, stdev=449.09
00:31:36.929 clat percentiles (usec):
00:31:36.929 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604],
00:31:36.929 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063],
00:31:36.929 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652],
00:31:36.929 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[ 9372],
00:31:36.929 | 99.99th=[10290]
00:31:36.929 bw ( KiB/s): min=46184, max=48112, per=99.96%, avg=47406.00, stdev=857.29, samples=4
00:31:36.929 iops : min=11546, max=12028, avg=11851.50, stdev=214.32, samples=4
00:31:36.929 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(92.4MiB/2005msec); 0 zone resets
00:31:36.929 slat (nsec): min=1558, max=150618, avg=1758.61, stdev=1202.81
00:31:36.929 clat (usec): min=2262, max=9338, avg=4801.61, stdev=369.42
00:31:36.929 lat (usec): min=2277, max=9340, avg=4803.37, stdev=369.28
00:31:36.929 clat percentiles (usec):
00:31:36.929 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555],
00:31:36.929 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883],
00:31:36.930 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342],
00:31:36.930 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 8455],
00:31:36.930 | 99.99th=[ 9241]
00:31:36.930 bw ( KiB/s): min=46728, max=47808, per=100.00%, avg=47204.00, stdev=464.85, samples=4
00:31:36.930 iops : min=11682, max=11952, avg=11801.00, stdev=116.21, samples=4
00:31:36.930 lat (msec) : 4=0.70%, 10=99.29%, 20=0.01%
00:31:36.930 cpu : usr=74.00%, sys=25.00%, ctx=75, majf=0, minf=3
00:31:36.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:31:36.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:36.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:36.930 issued rwts: total=23771,23661,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:36.930 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:36.930
00:31:36.930 Run status group 0 (all jobs):
00:31:36.930 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec
00:31:36.930 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=92.4MiB (96.9MB), run=2005-2005msec
00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 --
# local sanitizers 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:36.930 16:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.188 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:37.188 fio-3.35 00:31:37.188 Starting 1 thread 00:31:39.720 00:31:39.720 test: (groupid=0, jobs=1): err= 0: pid=1138347: Mon Dec 16 16:37:27 2024 00:31:39.720 read: IOPS=10.9k, BW=171MiB/s (179MB/s)(343MiB/2008msec) 00:31:39.720 slat (nsec): min=2466, max=85714, avg=2810.60, stdev=1413.70 00:31:39.720 clat (usec): min=1360, max=14325, avg=6727.32, stdev=1633.68 00:31:39.720 lat (usec): min=1363, max=14328, avg=6730.13, stdev=1633.83 00:31:39.720 clat percentiles (usec): 00:31:39.720 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5342], 00:31:39.720 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7177], 00:31:39.720 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9503], 00:31:39.720 | 99.00th=[11207], 99.50th=[11994], 99.90th=[13173], 99.95th=[13304], 00:31:39.721 | 99.99th=[14222] 00:31:39.721 bw ( KiB/s): min=82336, max=94208, per=51.07%, avg=89416.00, stdev=5650.36, samples=4 00:31:39.721 iops : min= 5146, max= 5888, avg=5588.50, stdev=353.15, samples=4 00:31:39.721 write: IOPS=6533, BW=102MiB/s (107MB/s)(183MiB/1793msec); 0 zone resets 
00:31:39.721 slat (usec): min=28, max=387, avg=31.57, stdev= 7.66 00:31:39.721 clat (usec): min=3232, max=15674, avg=8598.58, stdev=1509.43 00:31:39.721 lat (usec): min=3262, max=15708, avg=8630.15, stdev=1511.23 00:31:39.721 clat percentiles (usec): 00:31:39.721 | 1.00th=[ 5932], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308], 00:31:39.721 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:31:39.721 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11207], 00:31:39.721 | 99.00th=[12649], 99.50th=[14091], 99.90th=[15008], 99.95th=[15401], 00:31:39.721 | 99.99th=[15664] 00:31:39.721 bw ( KiB/s): min=86592, max=98304, per=89.27%, avg=93312.00, stdev=5904.90, samples=4 00:31:39.721 iops : min= 5412, max= 6144, avg=5832.00, stdev=369.06, samples=4 00:31:39.721 lat (msec) : 2=0.05%, 4=2.10%, 10=89.09%, 20=8.75% 00:31:39.721 cpu : usr=85.80%, sys=13.30%, ctx=52, majf=0, minf=3 00:31:39.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:39.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.721 issued rwts: total=21972,11714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.721 00:31:39.721 Run status group 0 (all jobs): 00:31:39.721 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=343MiB (360MB), run=2008-2008msec 00:31:39.721 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=183MiB (192MB), run=1793-1793msec 00:31:39.721 16:37:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:39.721 16:37:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:43.004 Nvme0n1 00:31:43.004 16:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=2089bc22-ef79-405d-a474-dd3e4e6a8b86 
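The lvol store lvs_0 has just been created on Nvme0n1 with a 1 GiB cluster size (-c 1073741824). The get_lvs_free_mb helper that runs next converts the store's free cluster count into MiB using the fields reported by bdev_lvol_get_lvstores: free_mb = free_clusters * cluster_size / 1 MiB. For lvs_0 that is 930 clusters * 1073741824 B = 930 * 1024 = 952320 MiB, which matches the 952320 echoed just below and then passed to bdev_lvol_create. A minimal standalone sketch of the same arithmetic, assuming the jq filters visible in the trace (rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; the real helper lives in autotest_common.sh and may differ in detail):

  # Sketch: derive free MiB for an lvstore the way get_lvs_free_mb appears to
  uuid=2089bc22-ef79-405d-a474-dd3e4e6a8b86   # lvs_0 uuid from this run
  fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
  cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
  echo $(( fc * (cs / 1024 / 1024) ))         # 930 * 1024 = 952320 MiB

The same formula accounts for the later lvs_n_0 value: 237847 free clusters with a 4194304 B (4 MiB) cluster size gives 237847 * 4 = 951388 MiB.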
00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 2089bc22-ef79-405d-a474-dd3e4e6a8b86 00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=2089bc22-ef79-405d-a474-dd3e4e6a8b86 00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:45.532 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:45.790 { 00:31:45.790 "uuid": "2089bc22-ef79-405d-a474-dd3e4e6a8b86", 00:31:45.790 "name": "lvs_0", 00:31:45.790 "base_bdev": "Nvme0n1", 00:31:45.790 "total_data_clusters": 930, 00:31:45.790 "free_clusters": 930, 00:31:45.790 "block_size": 512, 00:31:45.790 "cluster_size": 1073741824 00:31:45.790 } 00:31:45.790 ]' 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2089bc22-ef79-405d-a474-dd3e4e6a8b86") .free_clusters' 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2089bc22-ef79-405d-a474-dd3e4e6a8b86") .cluster_size' 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:45.790 952320 00:31:45.790 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:46.357 65eba792-fec4-417f-a900-1152826e288e 00:31:46.357 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:46.357 16:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:46.615 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:46.873 16:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:47.131 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:47.131 fio-3.35 00:31:47.131 Starting 1 thread 00:31:49.664 00:31:49.664 test: (groupid=0, jobs=1): err= 0: pid=1140048: Mon Dec 16 16:37:38 2024 00:31:49.664 read: IOPS=8170, BW=31.9MiB/s (33.5MB/s)(64.0MiB/2006msec) 00:31:49.664 slat (nsec): min=1517, max=109248, avg=1666.79, stdev=1187.77 00:31:49.664 clat (usec): min=743, max=170017, avg=8619.50, stdev=10207.63 00:31:49.664 lat (usec): min=744, max=170039, avg=8621.16, stdev=10207.82 00:31:49.664 clat percentiles (msec): 00:31:49.664 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:31:49.664 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:31:49.664 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:49.664 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169], 00:31:49.664 | 99.99th=[ 171] 00:31:49.664 bw ( KiB/s): min=23416, max=36048, 
per=99.85%, avg=32632.00, stdev=6149.20, samples=4 00:31:49.664 iops : min= 5854, max= 9012, avg=8158.00, stdev=1537.32, samples=4 00:31:49.664 write: IOPS=8164, BW=31.9MiB/s (33.4MB/s)(64.0MiB/2006msec); 0 zone resets 00:31:49.664 slat (nsec): min=1552, max=113395, avg=1750.30, stdev=1200.11 00:31:49.664 clat (usec): min=347, max=168537, avg=7002.13, stdev=9532.75 00:31:49.664 lat (usec): min=348, max=168543, avg=7003.88, stdev=9532.98 00:31:49.664 clat percentiles (msec): 00:31:49.664 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:49.664 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:49.664 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:49.664 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 169], 99.95th=[ 169], 00:31:49.664 | 99.99th=[ 169] 00:31:49.664 bw ( KiB/s): min=24552, max=35456, per=99.98%, avg=32650.00, stdev=5399.26, samples=4 00:31:49.664 iops : min= 6138, max= 8864, avg=8162.50, stdev=1349.81, samples=4 00:31:49.664 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:49.664 lat (msec) : 2=0.04%, 4=0.24%, 10=99.17%, 20=0.13%, 250=0.39% 00:31:49.664 cpu : usr=69.48%, sys=29.68%, ctx=99, majf=0, minf=3 00:31:49.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:49.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.665 issued rwts: total=16390,16378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.665 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.665 00:31:49.665 Run status group 0 (all jobs): 00:31:49.665 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.0MiB (67.1MB), run=2006-2006msec 00:31:49.665 WRITE: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=64.0MiB (67.1MB), run=2006-2006msec 00:31:49.665 16:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:49.923 16:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e0f758e3-3a49-4fa7-ac0d-2afb89186528 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e0f758e3-3a49-4fa7-ac0d-2afb89186528 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=e0f758e3-3a49-4fa7-ac0d-2afb89186528 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:51.300 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:51.300 { 00:31:51.300 "uuid": "2089bc22-ef79-405d-a474-dd3e4e6a8b86", 00:31:51.301 "name": "lvs_0", 00:31:51.301 "base_bdev": "Nvme0n1", 00:31:51.301 "total_data_clusters": 930, 00:31:51.301 "free_clusters": 0, 00:31:51.301 "block_size": 512, 
00:31:51.301 "cluster_size": 1073741824 00:31:51.301 }, 00:31:51.301 { 00:31:51.301 "uuid": "e0f758e3-3a49-4fa7-ac0d-2afb89186528", 00:31:51.301 "name": "lvs_n_0", 00:31:51.301 "base_bdev": "65eba792-fec4-417f-a900-1152826e288e", 00:31:51.301 "total_data_clusters": 237847, 00:31:51.301 "free_clusters": 237847, 00:31:51.301 "block_size": 512, 00:31:51.301 "cluster_size": 4194304 00:31:51.301 } 00:31:51.301 ]' 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="e0f758e3-3a49-4fa7-ac0d-2afb89186528") .free_clusters' 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="e0f758e3-3a49-4fa7-ac0d-2afb89186528") .cluster_size' 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:51.301 951388 00:31:51.301 16:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:51.868 9ac6890f-fbd6-499a-b7ae-d742a7f4a844 00:31:51.868 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:52.127 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:52.385 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:52.667 16:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in 
"${sanitizers[@]}" 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:52.667 16:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:52.928 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:52.928 fio-3.35 00:31:52.928 Starting 1 thread 00:31:55.554 [2024-12-16 16:37:43.652657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c93d0 is same with the state(6) to be set 00:31:55.554 [2024-12-16 16:37:43.652699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c93d0 is same with the state(6) to be set 00:31:55.554 [2024-12-16 16:37:43.652708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c93d0 is same with the state(6) to be set 00:31:55.554 [2024-12-16 16:37:43.652714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c93d0 is same with the state(6) to be set 00:31:55.554 00:31:55.554 test: (groupid=0, jobs=1): err= 0: pid=1141061: Mon Dec 16 16:37:43 2024 00:31:55.554 read: IOPS=7890, BW=30.8MiB/s (32.3MB/s)(61.9MiB/2007msec) 00:31:55.554 slat (nsec): min=1491, max=100962, avg=1646.25, stdev=1127.64 00:31:55.554 clat (usec): min=3066, max=15057, avg=8924.35, stdev=804.11 00:31:55.554 lat (usec): min=3069, max=15059, avg=8926.00, stdev=804.05 00:31:55.554 clat percentiles (usec): 00:31:55.554 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8291], 00:31:55.554 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:31:55.554 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:31:55.554 | 99.00th=[10683], 99.50th=[10814], 99.90th=[14091], 99.95th=[14353], 00:31:55.554 | 99.99th=[15008] 00:31:55.554 bw ( KiB/s): min=30376, max=32056, per=99.89%, avg=31528.00, stdev=780.10, samples=4 00:31:55.554 iops : min= 7594, max= 
8014, avg=7882.00, stdev=195.02, samples=4 00:31:55.554 write: IOPS=7864, BW=30.7MiB/s (32.2MB/s)(61.7MiB/2007msec); 0 zone resets 00:31:55.554 slat (nsec): min=1495, max=77468, avg=1718.95, stdev=824.23 00:31:55.554 clat (usec): min=1422, max=13542, avg=7236.02, stdev=653.97 00:31:55.554 lat (usec): min=1427, max=13544, avg=7237.74, stdev=653.94 00:31:55.554 clat percentiles (usec): 00:31:55.554 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:31:55.554 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:31:55.554 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:31:55.554 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11469], 99.95th=[12387], 00:31:55.554 | 99.99th=[13566] 00:31:55.554 bw ( KiB/s): min=31424, max=31552, per=100.00%, avg=31460.00, stdev=61.80, samples=4 00:31:55.554 iops : min= 7856, max= 7888, avg=7865.00, stdev=15.45, samples=4 00:31:55.554 lat (msec) : 2=0.01%, 4=0.11%, 10=95.99%, 20=3.90% 00:31:55.554 cpu : usr=71.19%, sys=28.02%, ctx=123, majf=0, minf=3 00:31:55.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:55.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.554 issued rwts: total=15837,15784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.554 00:31:55.554 Run status group 0 (all jobs): 00:31:55.554 READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.9MiB (64.9MB), run=2007-2007msec 00:31:55.554 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.7MiB (64.7MB), run=2007-2007msec 00:31:55.554 16:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:55.554 16:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:55.554 16:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:59.726 16:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:59.726 16:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:02.247 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:02.503 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.396 16:37:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.396 rmmod nvme_tcp 00:32:04.396 rmmod nvme_fabrics 00:32:04.396 rmmod nvme_keyring 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1137358 ']' 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1137358 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1137358 ']' 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1137358 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137358 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137358' 00:32:04.396 killing process with pid 1137358 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1137358 00:32:04.396 16:37:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1137358 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.655 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.561 00:32:06.561 real 0m40.106s 00:32:06.561 user 2m41.332s 00:32:06.561 sys 0m8.943s 00:32:06.561 16:37:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.561 ************************************ 00:32:06.561 END TEST nvmf_fio_host 00:32:06.561 ************************************ 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.561 ************************************ 00:32:06.561 START TEST nvmf_failover 00:32:06.561 ************************************ 00:32:06.561 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:06.820 * Looking for test storage... 00:32:06.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:06.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.820 --rc genhtml_branch_coverage=1 00:32:06.820 --rc genhtml_function_coverage=1 00:32:06.820 --rc genhtml_legend=1 00:32:06.820 --rc geninfo_all_blocks=1 00:32:06.820 --rc geninfo_unexecuted_blocks=1 00:32:06.820 00:32:06.820 ' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:06.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.820 --rc genhtml_branch_coverage=1 00:32:06.820 --rc genhtml_function_coverage=1 00:32:06.820 --rc genhtml_legend=1 00:32:06.820 --rc geninfo_all_blocks=1 00:32:06.820 --rc geninfo_unexecuted_blocks=1 00:32:06.820 00:32:06.820 ' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:06.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.820 --rc genhtml_branch_coverage=1 00:32:06.820 --rc genhtml_function_coverage=1 00:32:06.820 --rc genhtml_legend=1 00:32:06.820 --rc geninfo_all_blocks=1 00:32:06.820 --rc geninfo_unexecuted_blocks=1 00:32:06.820 00:32:06.820 ' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:06.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.820 --rc genhtml_branch_coverage=1 00:32:06.820 --rc genhtml_function_coverage=1 00:32:06.820 --rc genhtml_legend=1 00:32:06.820 --rc geninfo_all_blocks=1 00:32:06.820 --rc geninfo_unexecuted_blocks=1 00:32:06.820 00:32:06.820 ' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:06.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
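failover.sh drives two separate SPDK processes over two RPC sockets: the nvmf target answers on the default /var/tmp/spdk.sock (reached through the rpc_py path set just above), while the bdevperf I/O generator gets its own socket, assigned on the next trace line as bdevperf_rpc_sock=/var/tmp/bdevperf.sock. rpc.py selects its endpoint with -s, so one script can configure the target and then steer bdevperf. A condensed sketch of the pattern; both commands appear verbatim further down in this log, and rpc.py again abbreviates the full scripts/rpc.py path:

  # target side, default socket: create the TCP transport
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  # bdevperf side, explicit socket: attach the remote namespace with failover policy
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

Note the overloaded -s: before the subcommand it is rpc.py's socket path, after bdev_nvme_attach_controller it is the NVMe/TCP service (port) number.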
00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.820 16:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:13.390 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.390 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:13.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:13.391 Found net devices under 0000:af:00.0: cvl_0_0 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:13.391 Found net devices under 0000:af:00.1: cvl_0_1 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:13.391 16:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:13.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:13.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms
00:32:13.391
00:32:13.391 --- 10.0.0.2 ping statistics ---
00:32:13.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:13.391 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:13.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:13.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:32:13.391
00:32:13.391 --- 10.0.0.1 ping statistics ---
00:32:13.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:13.391 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1146311
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1146311
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1146311 ']'
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:13.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:13.391 [2024-12-16 16:38:01.309115] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:13.391 [2024-12-16 16:38:01.309158] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:13.391 [2024-12-16 16:38:01.389840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:13.391 [2024-12-16 16:38:01.411624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:13.391 [2024-12-16 16:38:01.411660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.391 [2024-12-16 16:38:01.411666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.391 [2024-12-16 16:38:01.411672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.391 [2024-12-16 16:38:01.411677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.391 [2024-12-16 16:38:01.412988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.391 [2024-12-16 16:38:01.413113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.391 [2024-12-16 16:38:01.413110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:13.391 [2024-12-16 16:38:01.704245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:13.391 Malloc0 00:32:13.391 16:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:13.649 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:13.906 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.163 [2024-12-16 16:38:02.540409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.163 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:14.163 [2024-12-16 16:38:02.736934] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:14.163 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:14.420 [2024-12-16 16:38:02.925558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1146559 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1146559 /var/tmp/bdevperf.sock 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1146559 ']' 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:14.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.420 16:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:14.677 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.677 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:14.677 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:15.241 NVMe0n1 00:32:15.241 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:15.498 00:32:15.498 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1146784 00:32:15.498 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:15.498 16:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:16.429 16:38:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.693 16:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:19.967 16:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.224 00:32:20.224 16:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:20.482 [2024-12-16 16:38:08.845143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa98e0 is same with the state(6) to be set
00:32:20.482 [... some 60 further near-identical tcp.c:1790 "recv state of tqpair=0xaa98e0 is same with the state(6) to be set" notices elided ...]
00:32:20.482 16:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:23.757 16:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:23.757 [2024-12-16 16:38:12.054996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
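Condensed, the listener choreography that host/failover.sh drives in the trace above (steps @43 through @53) is a handful of rpc.py calls issued while bdevperf keeps I/O running. The following is only a sketch assembled from the commands already traced, not the full script; the paths, ports, and NQN are taken verbatim from the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # @43: drop the primary path; outstanding I/O fails over to the 4421 path
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # @47/@48: attach a third path on 4422, then drop 4421 as well
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # @53: re-add the original listener so the host can fail back to 4420
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420

The bursts of tcp.c:1790 notices around each removal appear as the target tears down the qpair for the dropped listener; with -x failover attached paths, the host is expected to retry the affected I/O on a surviving path.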
00:32:23.757 16:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:24.688 16:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:24.688 [2024-12-16 16:38:13.272693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaa690 is same with the state(6) to be set
00:32:24.689 [... some 50 further near-identical tcp.c:1790 "recv state of tqpair=0xaaa690 is same with the state(6) to be set" notices elided ...]
00:32:24.946 16:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1146784
00:32:31.503 {
00:32:31.503   "results": [
00:32:31.503     {
00:32:31.503       "job": "NVMe0n1",
00:32:31.503       "core_mask": "0x1",
00:32:31.503       "workload": "verify",
00:32:31.503       "status": "finished",
00:32:31.503       "verify_range": {
00:32:31.503         "start": 0,
00:32:31.503         "length": 16384
00:32:31.503       },
00:32:31.503       "queue_depth": 128,
00:32:31.503       "io_size": 4096,
00:32:31.503       "runtime": 15.009209,
00:32:31.503       "iops": 11208.518716742501,
00:32:31.503       "mibps": 43.783276237275395,
00:32:31.503       "io_failed": 12237,
00:32:31.503       "io_timeout": 0,
00:32:31.503       "avg_latency_us": 10623.702026793828,
00:32:31.503       "min_latency_us": 425.2038095238095,
00:32:31.503       "max_latency_us": 30208.975238095238
00:32:31.503     }
00:32:31.503   ],
00:32:31.503   "core_count": 1
00:32:31.503 }
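The JSON block above is the bdevperf summary for the whole 15 s verify run across the path flips: roughly 11.2K IOPS sustained, with 12237 I/Os reported failed (io_failed) during the listener removals. As a quick sanity check, the reported mibps follows directly from iops and io_size; awk is used below purely for the arithmetic:

    # MiB/s = iops * io_size / 2^20
    awk 'BEGIN { printf "%.2f\n", 11208.518716742501 * 4096 / (1024 * 1024) }'
    # prints 43.78, matching the "mibps" field above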
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1146559
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146559 ']'
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146559
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146559
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146559'
00:32:31.503 killing process with pid 1146559
00:32:31.503 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146559
00:32:31.504 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146559
00:32:31.504 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:31.504 [2024-12-16 16:38:03.000729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:31.504 [2024-12-16 16:38:03.000778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146559 ]
00:32:31.504 [2024-12-16 16:38:03.077960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:31.504 [2024-12-16 16:38:03.100382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:31.504 Running I/O for 15 seconds...
00:32:31.504 11233.00 IOPS, 43.88 MiB/s [2024-12-16T15:38:20.113Z] [2024-12-16 16:38:05.181593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:31.504 [2024-12-16 16:38:05.181631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:31.505 [... dozens more of these command/completion pairs elided: each I/O outstanding when the 4420 submission queue was deleted (WRITE lba 100736-101168 and READ lba 100160-100552 in this stretch) completes ABORTED - SQ DELETION ...]
00:32:31.506 [2024-12-16 16:38:05.183275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100560 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.506 [2024-12-16 16:38:05.183417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:31.506 [2024-12-16 16:38:05.183431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.506 [2024-12-16 16:38:05.183439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 16:38:05.183568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.507 [2024-12-16 
16:38:05.183583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67b30 is same with the state(6) to be set 00:32:31.507 [2024-12-16 16:38:05.183600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.507 [2024-12-16 16:38:05.183606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.507 [2024-12-16 16:38:05.183612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:32:31.507 [2024-12-16 16:38:05.183620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183663] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:31.507 [2024-12-16 16:38:05.183686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.507 [2024-12-16 16:38:05.183694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.507 [2024-12-16 16:38:05.183708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.507 [2024-12-16 16:38:05.183725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.507 [2024-12-16 16:38:05.183739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.507 [2024-12-16 16:38:05.183752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:31.507 [2024-12-16 16:38:05.186541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:31.507 [2024-12-16 16:38:05.186570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a43460 (9): Bad file descriptor 00:32:31.507 [2024-12-16 16:38:05.216180] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
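The sequence above is the expected failover path for a two-trid setup: the TCP qpair to 10.0.0.2:4420 dies, every queued command is completed with ABORTED - SQ DELETION (dnr:0, so the upper layer may retry), and bdev_nvme resets the controller onto the 10.0.0.2:4421 path. For reference, a setup like this is typically created before the test by attaching the same subsystem NQN over both listeners under one controller name; a minimal sketch using SPDK's rpc.py (the controller name Nvme0, the script path, and the -x failover multipath mode are assumptions based on SPDK's rpc.py documentation, the actual test-script invocation is not shown in this log):

+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With both trids registered under one controller name, bdev_nvme_failover_trid can move I/O to the alternate path instead of failing the bdev outright.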
00:32:31.507 11209.00 IOPS, 43.79 MiB/s [2024-12-16T15:38:20.116Z] 11336.00 IOPS, 44.28 MiB/s [2024-12-16T15:38:20.116Z] 11368.00 IOPS, 44.41 MiB/s [2024-12-16T15:38:20.116Z]
00:32:31.507 [2024-12-16 16:38:08.847732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.507 [2024-12-16 16:38:08.847766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:31.508 [2024-12-16 16:38:08.847784 - 16:38:08.848765] [... the same command + ABORTED - SQ DELETION (00/08) qid:1 completion pair repeats for one more queued READ (lba:52704, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and for every queued WRITE from lba:52832 through lba:53344 in steps of 8 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000) ...]
00:32:31.509 [2024-12-16 16:38:08.848782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:31.509 [2024-12-16 16:38:08.848790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53352 len:8 PRP1 0x0 PRP2 0x0
00:32:31.509 [2024-12-16 16:38:08.848796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
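The per-interval throughput figures above are self-consistent with the 4 KiB I/O size implied by len:8 at 512-byte sectors: 11209 IOPS x 8 x 512 B = 45,912,064 B/s, which is about 43.79 MiB/s, and likewise 11336 IOPS gives about 44.28 MiB/s. The status tuple (00/08) decodes, per the NVMe specification, to Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion); dnr:0 means the Do Not Retry bit is clear, which is why these I/Os can be requeued once the failover completes.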
00:32:31.509 [2024-12-16 16:38:08.848806 - 16:38:08.849660] [... the abort sequence (579:nvme_qpair_abort_queued_reqs: aborting queued i/o, 558:nvme_qpair_manual_complete_request: Command completed manually, WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) repeats for every remaining queued WRITE, lba:53360 through lba:53640 in steps of 8 ...]
00:32:31.510 [2024-12-16 16:38:08.849670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:31.510 [2024-12-16 16:38:08.860114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:31.510 [2024-12-16 16:38:08.860126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53648 len:8 PRP1 0x0 PRP2 0x0
00:32:31.510 [2024-12-16 16:38:08.860139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:31.510 [2024-12-16 16:38:08.860149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:31.510 [2024-12-16 16:38:08.860156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:31.510 [2024-12-16 16:38:08.860164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53656 len:8 PRP1 0x0 PRP2 0x0
00:32:31.510 [2024-12-16 16:38:08.860175] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.510 [2024-12-16 16:38:08.860184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.510 [2024-12-16 16:38:08.860191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.510 [2024-12-16 16:38:08.860198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53664 len:8 PRP1 0x0 PRP2 0x0 00:32:31.510 [2024-12-16 16:38:08.860207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.510 [2024-12-16 16:38:08.860216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.510 [2024-12-16 16:38:08.860223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.510 [2024-12-16 16:38:08.860230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53672 len:8 PRP1 0x0 PRP2 0x0 00:32:31.510 [2024-12-16 16:38:08.860239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.510 [2024-12-16 16:38:08.860249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.510 [2024-12-16 16:38:08.860256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.510 [2024-12-16 16:38:08.860263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53680 len:8 PRP1 0x0 PRP2 0x0 00:32:31.510 [2024-12-16 16:38:08.860272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.510 [2024-12-16 16:38:08.860281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.510 [2024-12-16 16:38:08.860288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.510 [2024-12-16 16:38:08.860295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53688 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53696 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53704 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53712 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52712 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52720 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52728 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52736 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52744 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:31.511 [2024-12-16 16:38:08.860570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52752 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52760 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52768 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52776 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52784 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52792 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860762] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52800 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52808 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52816 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.511 [2024-12-16 16:38:08.860863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.511 [2024-12-16 16:38:08.860870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52824 len:8 PRP1 0x0 PRP2 0x0 00:32:31.511 [2024-12-16 16:38:08.860879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860926] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:31.511 [2024-12-16 16:38:08.860953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.511 [2024-12-16 16:38:08.860965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.511 [2024-12-16 16:38:08.860985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.860994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.511 [2024-12-16 16:38:08.861003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.511 [2024-12-16 16:38:08.861013] nvme_qpair.c: 
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:3, cid:2, cid:1, cid:0) each completed ABORTED - SQ DELETION (00/08), 16:38:08.860953 - 16:38:08.861022 ...]
00:32:31.511 [2024-12-16 16:38:08.861030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:31.511 [2024-12-16 16:38:08.861058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a43460 (9): Bad file descriptor
00:32:31.511 [2024-12-16 16:38:08.864803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:31.511 [2024-12-16 16:38:09.019786] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:32:31.511 10960.20 IOPS, 42.81 MiB/s [2024-12-16T15:38:20.120Z] 11026.83 IOPS, 43.07 MiB/s [2024-12-16T15:38:20.120Z] 11103.43 IOPS, 43.37 MiB/s [2024-12-16T15:38:20.120Z] 11137.50 IOPS, 43.51 MiB/s [2024-12-16T15:38:20.120Z] 11175.44 IOPS, 43.65 MiB/s [2024-12-16T15:38:20.120Z]
00:32:31.511 [2024-12-16 16:38:13.274870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.511 [2024-12-16 16:38:13.274905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ABORTED - SQ DELETION (00/08) completion repeated for in-flight WRITE commands lba:107824 through lba:108584 (sqid:1, various cids, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), 16:38:13.274920 - 16:38:13.276348 ...]
00:32:31.514 [2024-12-16 16:38:13.276368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:31.514 [2024-12-16 16:38:13.276377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108592 len:8 PRP1 0x0 PRP2 0x0
0x0 PRP2 0x0 00:32:31.514 [2024-12-16 16:38:13.276386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.514 [2024-12-16 16:38:13.276415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.514 [2024-12-16 16:38:13.276426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeats for admin qpair cid:1-3 ...]
00:32:31.514 [2024-12-16 16:38:13.276474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43460 is same with the state(6) to be set
00:32:31.514 [2024-12-16 16:38:13.276642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.514 [2024-12-16 16:38:13.276650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.514 [2024-12-16 16:38:13.276656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108600 len:8 PRP1 0x0 PRP2 0x0 00:32:31.514 [2024-12-16 16:38:13.276662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical three-step abort sequence (nvme_qpair_abort_queued_reqs -> nvme_qpair_manual_complete_request -> ABORTED - SQ DELETION (00/08)) repeats for each queued WRITE, lba:108608-108760, len:8 ...]
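The burst above is the SPDK NVMe driver draining a TCP qpair at disconnect: nvme_qpair_abort_queued_reqs() reports each still-queued request, nvme_qpair_manual_complete_request() completes it in software, and spdk_nvme_print_completion() prints the status pair (00/08), which the NVMe base specification defines as Generic Command Status (SCT 0x0), Status Code 0x08, Command Aborted due to SQ Deletion. A minimal standalone sketch of that status decoding (a hypothetical helper, not an SPDK API):

    #include <stdio.h>

    /* NOT SPDK code: decodes the "(SCT/SC)" pair printed in the log above */
    static const char *nvme_generic_sc_str(unsigned sc)
    {
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x07: return "ABORT REQUESTED";
            case 0x08: return "ABORTED - SQ DELETION"; /* the (00/08) seen above */
            default:   return "OTHER GENERIC STATUS";
            }
    }

    int main(void)
    {
            unsigned sct = 0x00, sc = 0x08;  /* "(00/08)" = SCT 0x0 / SC 0x08 */

            if (sct == 0x00)                 /* SCT 0x0 = Generic Command Status */
                    printf("(%02x/%02x) -> %s\n", sct, sc, nvme_generic_sc_str(sc));
            return 0;
    }

Built with any C compiler, this prints "(00/08) -> ABORTED - SQ DELETION", matching every completion in this burst.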
[... abort sequence continues for queued WRITEs lba:108768-108832 (len:8), then a second run begins with a READ at lba:107816 followed by WRITEs lba:107824-108224 (len:8), each completed manually with ABORTED - SQ DELETION (00/08) ...]
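Each nvme_io_qpair_print_command line carries the opcode, lba, and length of the aborted request, so a run like this can be reduced to contiguous ranges mechanically rather than read entry by entry. A rough sketch of that extraction, keyed to the line format shown above (the program and its sample line are illustrative, not part of the test):

    #include <stdio.h>
    #include <string.h>

    /* hypothetical helper, not part of SPDK: pull lba/len out of one
     * nvme_io_qpair_print_command log line like those above */
    int main(void)
    {
            const char *line =
                "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE "
                "sqid:1 cid:0 nsid:1 lba:108720 len:8 PRP1 0x0 PRP2 0x0";
            const char *p = strstr(line, "lba:");
            unsigned long lba;
            unsigned len;

            if (p != NULL && sscanf(p, "lba:%lu len:%u", &lba, &len) == 2)
                    printf("aborted I/O at lba %lu, %u block(s)\n", lba, len);
            return 0;
    }

Applied over the whole burst, the extracted lbas collapse into the contiguous 8-block ranges summarized here.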
[... abort sequence continues for queued WRITEs lba:108232-108560 (len:8), each completed manually with ABORTED - SQ DELETION (00/08) ...]
[2024-12-16 16:38:13.298475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.519 [2024-12-16 16:38:13.298485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.519 [2024-12-16 16:38:13.298494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:0 nsid:1 lba:108568 len:8 PRP1 0x0 PRP2 0x0 00:32:31.519 [2024-12-16 16:38:13.298508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.519 [2024-12-16 16:38:13.298521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.519 [2024-12-16 16:38:13.298530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.519 [2024-12-16 16:38:13.298541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108576 len:8 PRP1 0x0 PRP2 0x0 00:32:31.519 [2024-12-16 16:38:13.298552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.519 [2024-12-16 16:38:13.298567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.519 [2024-12-16 16:38:13.298576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.519 [2024-12-16 16:38:13.298586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108584 len:8 PRP1 0x0 PRP2 0x0 00:32:31.519 [2024-12-16 16:38:13.298598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.519 [2024-12-16 16:38:13.298610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:31.519 [2024-12-16 16:38:13.298619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:31.519 [2024-12-16 16:38:13.298629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108592 len:8 PRP1 0x0 PRP2 0x0 00:32:31.519 [2024-12-16 16:38:13.298641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.519 [2024-12-16 16:38:13.298697] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:31.519 [2024-12-16 16:38:13.298713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:31.519 [2024-12-16 16:38:13.298763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a43460 (9): Bad file descriptor 00:32:31.519 [2024-12-16 16:38:13.303939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:31.520 [2024-12-16 16:38:13.371396] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
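The abort storm above is the expected effect of a forced failover: deleting the submission queue completes every queued WRITE as ABORTED - SQ DELETION, after which bdev_nvme fails over to the next listener and resets the controller. The harness judges the run by counting those reset messages, as the failover.sh trace below shows (grep -c ... count=3). A condensed sketch of that sequence and check, with the sockets, ports, NQN, and expected count taken from this trace (the real script interleaves bdevperf runs between the detaches):

  # Sketch only; assumes the target and bdevperf are already running.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

  # Expose extra listeners so the host has paths to fail over to.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # Register all three paths on the bdevperf side with failover enabled.
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN -x failover
  done

  # Detaching the active path forces a failover to the next listener.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  sleep 3

  # Pass/fail: three forced failovers must each log a successful reset.
  count=$(grep -c 'Resetting controller successful' "$LOG")
  (( count == 3 )) || { echo "expected 3 resets, saw $count" >&2; exit 1; }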
00:32:31.520 11085.50 IOPS, 43.30 MiB/s [2024-12-16T15:38:20.129Z] 11118.36 IOPS, 43.43 MiB/s [2024-12-16T15:38:20.129Z] 11144.00 IOPS, 43.53 MiB/s [2024-12-16T15:38:20.129Z] 11166.54 IOPS, 43.62 MiB/s [2024-12-16T15:38:20.129Z] 11198.29 IOPS, 43.74 MiB/s
00:32:31.520 Latency(us)
00:32:31.520 [2024-12-16T15:38:20.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:31.520 Verification LBA range: start 0x0 length 0x4000
00:32:31.520 NVMe0n1 : 15.01 11208.52 43.78 815.30 0.00 10623.70 425.20 30208.98
00:32:31.520 [2024-12-16T15:38:20.129Z] ===================================================================================================================
00:32:31.520 [2024-12-16T15:38:20.129Z] Total : 11208.52 43.78 815.30 0.00 10623.70 425.20 30208.98
00:32:31.520 Received shutdown signal, test time was about 15.000000 seconds
00:32:31.520
00:32:31.520 Latency(us)
00:32:31.520 [2024-12-16T15:38:20.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.520 [2024-12-16T15:38:20.129Z] ===================================================================================================================
00:32:31.520 [2024-12-16T15:38:20.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1149190
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1149190 /var/tmp/bdevperf.sock
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1149190 ']'
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:31.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:31.520 [2024-12-16 16:38:19.761294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:31.520 [2024-12-16 16:38:19.949837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:31.520 16:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:31.776 NVMe0n1 00:32:31.776 16:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:32.339 00:32:32.339 16:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:32.596 00:32:32.596 16:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:32.596 16:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:32.852 16:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:32.852 16:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:36.123 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:36.123 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:36.123 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1149920 00:32:36.123 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:36.123 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1149920 00:32:37.498 { 00:32:37.498 "results": [ 00:32:37.498 { 00:32:37.498 "job": "NVMe0n1", 00:32:37.498 "core_mask": "0x1", 
00:32:37.498 "workload": "verify", 00:32:37.498 "status": "finished", 00:32:37.498 "verify_range": { 00:32:37.498 "start": 0, 00:32:37.498 "length": 16384 00:32:37.498 }, 00:32:37.498 "queue_depth": 128, 00:32:37.498 "io_size": 4096, 00:32:37.498 "runtime": 1.004254, 00:32:37.498 "iops": 11448.298936324874, 00:32:37.498 "mibps": 44.71991772001904, 00:32:37.498 "io_failed": 0, 00:32:37.498 "io_timeout": 0, 00:32:37.498 "avg_latency_us": 11138.451391625973, 00:32:37.498 "min_latency_us": 1185.8895238095238, 00:32:37.498 "max_latency_us": 8987.794285714286 00:32:37.498 } 00:32:37.498 ], 00:32:37.498 "core_count": 1 00:32:37.498 } 00:32:37.498 16:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:37.498 [2024-12-16 16:38:19.401353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:37.498 [2024-12-16 16:38:19.401414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149190 ] 00:32:37.498 [2024-12-16 16:38:19.478714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.498 [2024-12-16 16:38:19.498559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.498 [2024-12-16 16:38:21.367126] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:37.498 [2024-12-16 16:38:21.367174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.498 [2024-12-16 16:38:21.367185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.498 [2024-12-16 16:38:21.367195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.498 [2024-12-16 16:38:21.367202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.498 [2024-12-16 16:38:21.367209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.498 [2024-12-16 16:38:21.367216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.498 [2024-12-16 16:38:21.367223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.498 [2024-12-16 16:38:21.367229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.498 [2024-12-16 16:38:21.367237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:37.498 [2024-12-16 16:38:21.367264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:37.498 [2024-12-16 16:38:21.367278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36460 (9): Bad file descriptor 00:32:37.498 [2024-12-16 16:38:21.419406] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:37.498 Running I/O for 1 seconds... 00:32:37.498 11369.00 IOPS, 44.41 MiB/s 00:32:37.498 Latency(us) 00:32:37.498 [2024-12-16T15:38:26.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:37.498 Verification LBA range: start 0x0 length 0x4000 00:32:37.498 NVMe0n1 : 1.00 11448.30 44.72 0.00 0.00 11138.45 1185.89 8987.79 00:32:37.498 [2024-12-16T15:38:26.107Z] =================================================================================================================== 00:32:37.498 [2024-12-16T15:38:26.107Z] Total : 11448.30 44.72 0.00 0.00 11138.45 1185.89 8987.79 00:32:37.498 16:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:37.498 16:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:37.498 16:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:37.754 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:37.754 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:37.754 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:38.011 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1149190 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1149190 ']' 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1149190 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149190 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149190' 00:32:41.282 killing process with pid 1149190 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1149190 00:32:41.282 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1149190 00:32:41.539 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:41.539 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.539 rmmod nvme_tcp 00:32:41.796 rmmod nvme_fabrics 00:32:41.796 rmmod nvme_keyring 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1146311 ']' 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1146311 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146311 ']' 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146311 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146311 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146311' 00:32:41.796 killing process with pid 1146311 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146311 00:32:41.796 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146311 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.054 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.957 00:32:43.957 real 0m37.358s 00:32:43.957 user 1m58.353s 00:32:43.957 sys 0m7.910s 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:43.957 ************************************ 00:32:43.957 END TEST nvmf_failover 00:32:43.957 ************************************ 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.957 16:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.217 ************************************ 00:32:44.217 START TEST nvmf_host_discovery 00:32:44.217 ************************************ 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:44.217 * Looking for test storage... 
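The discovery test that starts here reuses the harness's physical TCP topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the setup traced a few entries below (interface, namespace names, and addresses are those from this trace; run as root):

  # Condensed from the nvmf_tcp_init trace further down.
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                     # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity check connectivity in both directions.
  ping -c 1 10.0.0.2
  ip netns exec $NS ping -c 1 10.0.0.1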
00:32:44.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:44.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.217 --rc genhtml_branch_coverage=1 00:32:44.217 --rc genhtml_function_coverage=1 00:32:44.217 --rc genhtml_legend=1 00:32:44.217 --rc geninfo_all_blocks=1 00:32:44.217 --rc geninfo_unexecuted_blocks=1 00:32:44.217 00:32:44.217 ' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:44.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.217 --rc genhtml_branch_coverage=1 00:32:44.217 --rc genhtml_function_coverage=1 00:32:44.217 --rc genhtml_legend=1 00:32:44.217 --rc geninfo_all_blocks=1 00:32:44.217 --rc geninfo_unexecuted_blocks=1 00:32:44.217 00:32:44.217 ' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:44.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.217 --rc genhtml_branch_coverage=1 00:32:44.217 --rc genhtml_function_coverage=1 00:32:44.217 --rc genhtml_legend=1 00:32:44.217 --rc geninfo_all_blocks=1 00:32:44.217 --rc geninfo_unexecuted_blocks=1 00:32:44.217 00:32:44.217 ' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:44.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.217 --rc genhtml_branch_coverage=1 00:32:44.217 --rc genhtml_function_coverage=1 00:32:44.217 --rc genhtml_legend=1 00:32:44.217 --rc geninfo_all_blocks=1 00:32:44.217 --rc geninfo_unexecuted_blocks=1 00:32:44.217 00:32:44.217 ' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:44.217 16:38:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:44.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.217 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.218 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.786 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.786 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.786 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.786 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.786 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:50.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:50.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.787 16:38:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:50.787 Found net devices under 0000:af:00.0: cvl_0_0 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:50.787 Found net devices under 0000:af:00.1: cvl_0_1 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.787 
16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:32:50.787 00:32:50.787 --- 10.0.0.2 ping statistics --- 00:32:50.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.787 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:32:50.787 00:32:50.787 --- 10.0.0.1 ping statistics --- 00:32:50.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.787 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.787 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1154286 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1154286 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1154286 ']' 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.788 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 [2024-12-16 16:38:38.821667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:32:50.788 [2024-12-16 16:38:38.821713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.788 [2024-12-16 16:38:38.898861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.788 [2024-12-16 16:38:38.918873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.788 [2024-12-16 16:38:38.918906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.788 [2024-12-16 16:38:38.918912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.788 [2024-12-16 16:38:38.918919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.788 [2024-12-16 16:38:38.918924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.788 [2024-12-16 16:38:38.919411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 [2024-12-16 16:38:39.061189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 [2024-12-16 16:38:39.073362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 null0 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 null1 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1154316 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1154316 /tmp/host.sock 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1154316 ']' 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:50.788 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 [2024-12-16 16:38:39.150721] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:32:50.788 [2024-12-16 16:38:39.150762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154316 ] 00:32:50.788 [2024-12-16 16:38:39.223996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.788 [2024-12-16 16:38:39.247374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.788 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:51.047 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:51.048 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.307 [2024-12-16 16:38:39.666873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:51.307 16:38:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:51.307 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:51.872 [2024-12-16 16:38:40.400602] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:51.872 [2024-12-16 16:38:40.400626] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:51.872 [2024-12-16 16:38:40.400638] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:52.150 [2024-12-16 16:38:40.527000] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:52.150 [2024-12-16 16:38:40.628621] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:52.150 [2024-12-16 16:38:40.629237] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x111ac60:1 started. 00:32:52.150 [2024-12-16 16:38:40.630599] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:52.150 [2024-12-16 16:38:40.630614] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:52.150 [2024-12-16 16:38:40.637946] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x111ac60 was disconnected and freed. delete nvme_qpair. 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.408 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.408 16:38:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.409 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:52.667 16:38:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:52.667 [2024-12-16 16:38:41.091144] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x111afe0:1 started. 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.667 [2024-12-16 16:38:41.099043] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x111afe0 was disconnected and freed. delete nvme_qpair. 
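The whole hot-attach flow above is driven over JSON-RPC against two sockets: target-side provisioning calls go to the nvmf_tgt on its default /var/tmp/spdk.sock, while host-side queries go to the second app on /tmp/host.sock. A minimal sketch of the same sequence, assuming an SPDK checkout's scripts/rpc.py in place of the harness's rpc_cmd wrapper (paths and the expected outputs in comments are illustrative, not part of the trace):

  # target side: transport, discovery listener, null bdev, subsystem with one namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  # host side: start discovery, then poll until the attached controller and bdev show up
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'          # expect "nvme0"
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs      # expect "nvme0n1"
  scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'       # 1 after the attach

Adding the second namespace (nvmf_subsystem_add_ns ... null1, as at discovery.sh@111 above) then bumps the bdev list to "nvme0n1 nvme0n2" and the notification count by one more, which is exactly what the waitforcondition checks around this point are polling for.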
00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.667 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.668 [2024-12-16 16:38:41.186953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:52.668 [2024-12-16 16:38:41.187462] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:52.668 [2024-12-16 16:38:41.187481] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.668 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:52.926 16:38:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.926 [2024-12-16 16:38:41.314850] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:52.926 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:52.927 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:52.927 [2024-12-16 16:38:41.374442] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:52.927 [2024-12-16 16:38:41.374476] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:52.927 [2024-12-16 16:38:41.374483] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:52.927 [2024-12-16 16:38:41.374488] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:53.862 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.863 [2024-12-16 16:38:42.451399] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:53.863 [2024-12-16 16:38:42.451421] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:53.863 [2024-12-16 16:38:42.453242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.863 [2024-12-16 16:38:42.453261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.863 [2024-12-16 16:38:42.453269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.863 [2024-12-16 16:38:42.453276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.863 [2024-12-16 16:38:42.453283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.863 [2024-12-16 16:38:42.453290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.863 [2024-12-16 16:38:42.453297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.863 [2024-12-16 16:38:42.453307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.863 [2024-12-16 16:38:42.453313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.863 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:53.863 [2024-12-16 16:38:42.463254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.122 [2024-12-16 16:38:42.473290] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.122 [2024-12-16 16:38:42.473302] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.122 [2024-12-16 16:38:42.473309] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.122 [2024-12-16 16:38:42.473317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.122 [2024-12-16 16:38:42.473335] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:54.122 [2024-12-16 16:38:42.473580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.122 [2024-12-16 16:38:42.473596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.122 [2024-12-16 16:38:42.473604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.122 [2024-12-16 16:38:42.473617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.123 [2024-12-16 16:38:42.473634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.123 [2024-12-16 16:38:42.473642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.123 [2024-12-16 16:38:42.473650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.123 [2024-12-16 16:38:42.473656] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:54.123 [2024-12-16 16:38:42.473661] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.123 [2024-12-16 16:38:42.473666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
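The errno = 111 in the reconnect attempts above is ECONNREFUSED: discovery.sh@127 removed the 4420 listener, so every connect() from bdev_nvme_reconnect_ctrlr to 10.0.0.2:4420 is refused, and each poll cycle ends in "Resetting controller failed." until the discovery path settles on 4421. A quick way to observe the same refusal from the shell (a sketch using bash's /dev/tcp pseudo-device; the listener state matches the trace at this point):

  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null || echo "4420 refused (errno 111), as the reconnect poller sees"
  (exec 3<>/dev/tcp/10.0.0.2/4421) 2>/dev/null && echo "4421 still accepting"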
00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.123 [2024-12-16 16:38:42.483366] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.123 [2024-12-16 16:38:42.483376] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.123 [2024-12-16 16:38:42.483380] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.483384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.123 [2024-12-16 16:38:42.483399] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.483574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.123 [2024-12-16 16:38:42.483586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.123 [2024-12-16 16:38:42.483593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.123 [2024-12-16 16:38:42.483604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.123 [2024-12-16 16:38:42.483613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.123 [2024-12-16 16:38:42.483619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.123 [2024-12-16 16:38:42.483626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.123 [2024-12-16 16:38:42.483632] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:54.123 [2024-12-16 16:38:42.483637] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.123 [2024-12-16 16:38:42.483640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:54.123 [2024-12-16 16:38:42.493429] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.123 [2024-12-16 16:38:42.493440] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.123 [2024-12-16 16:38:42.493443] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.493448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.123 [2024-12-16 16:38:42.493461] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:54.123 [2024-12-16 16:38:42.493660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.123 [2024-12-16 16:38:42.493673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.123 [2024-12-16 16:38:42.493680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.123 [2024-12-16 16:38:42.493690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.123 [2024-12-16 16:38:42.493727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.123 [2024-12-16 16:38:42.493735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.123 [2024-12-16 16:38:42.493742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.123 [2024-12-16 16:38:42.493747] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:54.123 [2024-12-16 16:38:42.493752] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.123 [2024-12-16 16:38:42.493755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:54.123 [2024-12-16 16:38:42.503493] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.123 [2024-12-16 16:38:42.503506] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.123 [2024-12-16 16:38:42.503511] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.503515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.123 [2024-12-16 16:38:42.503530] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.503758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.123 [2024-12-16 16:38:42.503773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.123 [2024-12-16 16:38:42.503781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.123 [2024-12-16 16:38:42.503792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.123 [2024-12-16 16:38:42.503808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.123 [2024-12-16 16:38:42.503815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.123 [2024-12-16 16:38:42.503821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.123 [2024-12-16 16:38:42.503827] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:54.123 [2024-12-16 16:38:42.503831] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.123 [2024-12-16 16:38:42.503835] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.123 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:54.123 [2024-12-16 16:38:42.513561] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.123 [2024-12-16 16:38:42.513574] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.123 [2024-12-16 16:38:42.513577] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.513585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.123 [2024-12-16 16:38:42.513599] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
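The waitforcondition/get_bdev_list xtrace above (autotest_common.sh@918 through @922) implies a small retry helper: stash the condition string, try it up to ten times, and return as soon as an eval of it succeeds. A reconstruction consistent with that trace follows; it is a sketch, not the repo's verbatim source, and the inter-attempt sleep is assumed (the condition here passes on the first evaluation, so no sleep appears in the trace).

    # reconstruction from the xtrace: retry an eval'd condition up to $max times
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 0.5   # assumed back-off; not visible in this trace
        done
        return 1
    }

    # helper seen in the trace: bdev names, sorted, flattened onto one line
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'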
00:32:54.123 [2024-12-16 16:38:42.513704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.123 [2024-12-16 16:38:42.513716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.123 [2024-12-16 16:38:42.513724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.123 [2024-12-16 16:38:42.513734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.123 [2024-12-16 16:38:42.513744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.123 [2024-12-16 16:38:42.513749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.123 [2024-12-16 16:38:42.513757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.123 [2024-12-16 16:38:42.513762] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:54.123 [2024-12-16 16:38:42.513767] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.123 [2024-12-16 16:38:42.513770] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:54.123 [2024-12-16 16:38:42.523630] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.123 [2024-12-16 16:38:42.523644] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.123 [2024-12-16 16:38:42.523648] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.523652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.123 [2024-12-16 16:38:42.523667] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:54.123 [2024-12-16 16:38:42.523916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.123 [2024-12-16 16:38:42.523930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.123 [2024-12-16 16:38:42.523937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.123 [2024-12-16 16:38:42.523949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.123 [2024-12-16 16:38:42.523960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.123 [2024-12-16 16:38:42.523967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.123 [2024-12-16 16:38:42.523974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.124 [2024-12-16 16:38:42.523980] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:54.124 [2024-12-16 16:38:42.523984] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.124 [2024-12-16 16:38:42.523988] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:54.124 [2024-12-16 16:38:42.533697] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:54.124 [2024-12-16 16:38:42.533707] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:54.124 [2024-12-16 16:38:42.533714] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:54.124 [2024-12-16 16:38:42.533718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:54.124 [2024-12-16 16:38:42.533731] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:54.124 [2024-12-16 16:38:42.533899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.124 [2024-12-16 16:38:42.533911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ecd70 with addr=10.0.0.2, port=4420 00:32:54.124 [2024-12-16 16:38:42.533919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd70 is same with the state(6) to be set 00:32:54.124 [2024-12-16 16:38:42.533930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd70 (9): Bad file descriptor 00:32:54.124 [2024-12-16 16:38:42.533940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:54.124 [2024-12-16 16:38:42.533947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:54.124 [2024-12-16 16:38:42.533953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:54.124 [2024-12-16 16:38:42.533959] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:54.124 [2024-12-16 16:38:42.533963] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:54.124 [2024-12-16 16:38:42.533967] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
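The blocks of connect() failed, errno = 111 (ECONNREFUSED) repeating every ~10 ms above are expected at this point in the test: the listener has been moved off 10.0.0.2:4420, and bdev_nvme keeps polling reconnects against the stale path until discovery removes it, which happens just below where 4420 is reported not found and 4421 found again. An out-of-band probe of the two ports, a sketch using the test's fixed addresses rather than anything from the suite, would confirm the refusal:

    # 4420 should refuse (errno 111 = ECONNREFUSED), 4421 should accept
    for port in 4420 4421; do
        if nc -z -w 1 10.0.0.2 "$port"; then
            echo "port $port: listener up"
        else
            echo "port $port: connection refused or unreachable"
        fi
    done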
00:32:54.124 [2024-12-16 16:38:42.538073] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:54.124 [2024-12-16 16:38:42.538088] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:54.124 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.405 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.441 [2024-12-16 16:38:43.869241] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:55.441 [2024-12-16 16:38:43.869262] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:55.441 [2024-12-16 16:38:43.869275] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:55.441 [2024-12-16 16:38:43.957527] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:55.700 [2024-12-16 16:38:44.222684] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:55.700 [2024-12-16 16:38:44.223273] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1126d60:1 started. 00:32:55.700 [2024-12-16 16:38:44.224825] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:55.700 [2024-12-16 16:38:44.224849] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:55.700 [2024-12-16 16:38:44.227069] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1126d60 was disconnected and freed. delete nvme_qpair. 
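The notification bookkeeping a few lines up (notification_count=2, notify_id=4) implies that get_notification_count asks the host SPDK instance for notifications newer than the last seen id and advances a cursor by the count. A minimal reconstruction from the @74/@75 trace lines; the cursor arithmetic is inferred from notify_id moving 2 to 4 after two notifications:

    # fetch notifications newer than $notify_id and advance the cursor
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }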
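Just above, a fresh bdev_nvme_start_discovery attaches cleanly (discovery ctrlr attached, log page fetched, new subsystem nvme0 at 4421, ctrlr created, attach done). The test then re-issues the same RPC under the NOT wrapper, so the step passes only if the RPC fails; the request/response that follows shows the expected -17 "File exists" rejection for a duplicate discovery name, and further down -110 "Connection timed out" once port 8010 (which has no listener) is tried with -T 3000. The NOT/valid_exec_arg plumbing in the trace boils down to status inversion, roughly:

    # sketch of the status-inverting wrapper walked through in the trace
    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # failure was the expected outcome
    }

    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w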
00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.700 request: 00:32:55.700 { 00:32:55.700 "name": "nvme", 00:32:55.700 "trtype": "tcp", 00:32:55.700 "traddr": "10.0.0.2", 00:32:55.700 "adrfam": "ipv4", 00:32:55.700 "trsvcid": "8009", 00:32:55.700 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:55.700 "wait_for_attach": true, 00:32:55.700 "method": "bdev_nvme_start_discovery", 00:32:55.700 "req_id": 1 00:32:55.700 } 00:32:55.700 Got JSON-RPC error response 00:32:55.700 response: 00:32:55.700 { 00:32:55.700 "code": -17, 00:32:55.700 "message": "File exists" 00:32:55.700 } 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:55.700 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:55.701 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:55.701 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.701 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:55.701 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.701 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.701 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.959 request: 00:32:55.959 { 00:32:55.959 "name": "nvme_second", 00:32:55.959 "trtype": "tcp", 00:32:55.959 "traddr": "10.0.0.2", 00:32:55.959 "adrfam": "ipv4", 00:32:55.959 "trsvcid": "8009", 00:32:55.959 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:55.959 "wait_for_attach": true, 00:32:55.959 "method": "bdev_nvme_start_discovery", 00:32:55.959 "req_id": 1 00:32:55.959 } 00:32:55.959 Got JSON-RPC error response 00:32:55.959 response: 00:32:55.959 { 00:32:55.959 "code": -17, 00:32:55.959 "message": "File exists" 00:32:55.959 } 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_discovery_info 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.959 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.960 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.896 [2024-12-16 16:38:45.464557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.896 [2024-12-16 16:38:45.464584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1124660 with addr=10.0.0.2, port=8010 00:32:56.896 [2024-12-16 16:38:45.464596] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:56.897 [2024-12-16 16:38:45.464603] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:56.897 [2024-12-16 16:38:45.464609] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:58.271 [2024-12-16 16:38:46.466964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.271 [2024-12-16 16:38:46.466987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1124660 with addr=10.0.0.2, port=8010 00:32:58.271 [2024-12-16 16:38:46.466999] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:58.271 [2024-12-16 16:38:46.467005] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:58.271 [2024-12-16 16:38:46.467012] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:59.206 [2024-12-16 16:38:47.469165] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:59.206 request: 00:32:59.206 { 00:32:59.206 "name": "nvme_second", 00:32:59.206 "trtype": "tcp", 00:32:59.206 "traddr": "10.0.0.2", 00:32:59.206 "adrfam": "ipv4", 00:32:59.206 "trsvcid": "8010", 00:32:59.206 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:59.206 "wait_for_attach": false, 00:32:59.206 "attach_timeout_ms": 3000, 00:32:59.206 "method": "bdev_nvme_start_discovery", 00:32:59.206 "req_id": 1 00:32:59.206 } 00:32:59.206 Got JSON-RPC error response 00:32:59.206 response: 00:32:59.206 { 00:32:59.206 "code": -110, 00:32:59.206 "message": "Connection timed out" 00:32:59.206 } 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:59.206 16:38:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1154316 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.206 rmmod nvme_tcp 00:32:59.206 rmmod nvme_fabrics 00:32:59.206 rmmod nvme_keyring 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1154286 ']' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1154286 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1154286 ']' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1154286 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154286 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1154286' 00:32:59.206 killing process with pid 1154286 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1154286 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1154286 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.206 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.207 16:38:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.207 16:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.743 00:33:01.743 real 0m17.282s 00:33:01.743 user 0m20.636s 00:33:01.743 sys 0m5.770s 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.743 ************************************ 00:33:01.743 END TEST nvmf_host_discovery 00:33:01.743 ************************************ 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.743 ************************************ 00:33:01.743 START TEST nvmf_host_multipath_status 00:33:01.743 ************************************ 00:33:01.743 16:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:01.743 * Looking for test storage... 
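The nvmftestfini sequence that closed the discovery test above condenses to the steps below: stop the host-side app (pid 1154316 in this run), unload the kernel initiator modules, stop the target (pid 1154286), strip the SPDK firewall rules, and flush the test interface. This is a paraphrase of the trace, not the verbatim nvmf/common.sh, and the function name is a stand-in:

    nvmftestfini_sketch() {
        kill "$host_pid"                        # host app, 1154316 here
        sync
        modprobe -v -r nvme-tcp                 # rmmod nvme_tcp/_fabrics/_keyring follow
        modprobe -v -r nvme-fabrics
        kill "$nvmfpid" && wait "$nvmfpid"      # target app, 1154286 here
        iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop test rules
        ip -4 addr flush cvl_0_1                # clear the test NIC address
    }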
00:33:01.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:01.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.743 --rc genhtml_branch_coverage=1 00:33:01.743 --rc genhtml_function_coverage=1 00:33:01.743 --rc genhtml_legend=1 00:33:01.743 --rc geninfo_all_blocks=1 00:33:01.743 --rc geninfo_unexecuted_blocks=1 00:33:01.743 00:33:01.743 ' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:01.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.743 --rc genhtml_branch_coverage=1 00:33:01.743 --rc genhtml_function_coverage=1 00:33:01.743 --rc genhtml_legend=1 00:33:01.743 --rc geninfo_all_blocks=1 00:33:01.743 --rc geninfo_unexecuted_blocks=1 00:33:01.743 00:33:01.743 ' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:01.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.743 --rc genhtml_branch_coverage=1 00:33:01.743 --rc genhtml_function_coverage=1 00:33:01.743 --rc genhtml_legend=1 00:33:01.743 --rc geninfo_all_blocks=1 00:33:01.743 --rc geninfo_unexecuted_blocks=1 00:33:01.743 00:33:01.743 ' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:01.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.743 --rc genhtml_branch_coverage=1 00:33:01.743 --rc genhtml_function_coverage=1 00:33:01.743 --rc genhtml_legend=1 00:33:01.743 --rc geninfo_all_blocks=1 00:33:01.743 --rc geninfo_unexecuted_blocks=1 00:33:01.743 00:33:01.743 ' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
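The scripts/common.sh walk above (lt 1.15 2, which becomes cmp_versions 1.15 '<' 2) splits each version string on '.', '-', and ':' and compares the pieces as integers, deciding at the first unequal component; 1.15 sorts before 2 because 1 < 2 in the leading position, hence the return 0. A simplified sketch of that comparison (the real cmp_versions also handles '>', '=', and guards non-numeric components with the decimal check seen in the trace):

    # simplified component-wise "less than", after the cmp_versions trace above
    version_lt() {
        local -a a b
        local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            if ((${a[i]:-0} > ${b[i]:-0})); then return 1; fi
            if ((${a[i]:-0} < ${b[i]:-0})); then return 0; fi
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"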
00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.743 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.744 16:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:08.310 16:38:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:08.310 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
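The e810/x722/mlx arrays being filled above are buckets keyed by PCI vendor:device pairs. A compact sketch of the pattern, assuming pci_bus_cache has already been populated elsewhere (as nvmf/common.sh does) with "vendor:device" keys mapping to whitespace-separated PCI addresses:

    declare -A pci_bus_cache    # "vendor:device" -> "addr [addr ...]", filled elsewhere
    e810=() x722=() mlx=() pci_devs=()
    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # Intel E810 family (ice driver)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})    # Intel X722 family
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # Mellanox (mlx5-class) parts
    pci_devs+=("${e810[@]}")    # this TCP run then keeps only the e810 list

The trace confirms the outcome: two E810 functions, 0000:af:00.0 and 0000:af:00.1, both with device id 0x159b, are picked up and their kernel net devices (cvl_0_0, cvl_0_1) located under /sys/bus/pci/devices/$pci/net/.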
00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:08.310 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.310 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:08.311 Found net devices under 0000:af:00.0: cvl_0_0 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:33:08.311 Found net devices under 0000:af:00.1: cvl_0_1 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.311 16:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.311 16:38:55 
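The nvmf_tcp_init sequence traced above builds a two-endpoint topology on a single host by hiding the target port in a network namespace. Consolidated from the commands in the trace:

    # cvl_0_0 becomes the target port inside the namespace; cvl_0_1 stays
    # in the root namespace as the initiator port.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port

Splitting the two ports across namespaces forces traffic onto the physical link, so the test exercises the real NICs rather than loopback.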
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:33:08.311 00:33:08.311 --- 10.0.0.2 ping statistics --- 00:33:08.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.311 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:08.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:33:08.311 00:33:08.311 --- 10.0.0.1 ping statistics --- 00:33:08.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.311 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1159294 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1159294 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159294 ']' 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.311 16:38:56 
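Before any NVMe traffic flows, reachability is proven in both directions and only then is the target launched inside the namespace, exactly as the pings and the nvmf_tgt invocation above show:

    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    # The app runs in the namespace so its listeners can bind 10.0.0.2.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3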
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:08.311 [2024-12-16 16:38:56.112011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:08.311 [2024-12-16 16:38:56.112052] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.311 [2024-12-16 16:38:56.189332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:08.311 [2024-12-16 16:38:56.211149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.311 [2024-12-16 16:38:56.211186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.311 [2024-12-16 16:38:56.211192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:08.311 [2024-12-16 16:38:56.211198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:08.311 [2024-12-16 16:38:56.211203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:08.311 [2024-12-16 16:38:56.212301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.311 [2024-12-16 16:38:56.212304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1159294 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:08.311 [2024-12-16 16:38:56.511671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:08.311 Malloc0 00:33:08.311 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:08.570 16:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.570 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:08.828 [2024-12-16 16:38:57.282692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.828 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:09.085 [2024-12-16 16:38:57.479223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1159539 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1159539 /var/tmp/bdevperf.sock 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159539 ']' 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:09.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
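With nvmf_tgt up, the RPC sequence above provisions a single RAM-backed namespace behind two TCP listeners; the second listener on port 4421 is what gives the host its second path to the same subsystem. A condensed replay (rpc.py shortened from its full repo path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Host side: bdevperf starts in wait mode (-z) on its own RPC socket
    # and only begins the 90 s verify workload when told to.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90

bdevperf then attaches Nvme0 twice over ports 4420 and 4421 with -x multipath, which is what yields the single multipath bdev Nvme0n1 seen in the trace.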
00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.085 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:09.343 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.343 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:09.343 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:09.344 16:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:09.909 Nvme0n1 00:33:09.909 16:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:10.168 Nvme0n1 00:33:10.168 16:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:10.168 16:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:12.068 16:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:12.068 16:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:12.326 16:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:12.584 16:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:13.518 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:13.518 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:13.518 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.518 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.776 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.776 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:13.776 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.776 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.035 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.293 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.293 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:14.293 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.293 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.551 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.551 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:14.551 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.552 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:14.810 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.810 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:14.810 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
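set_ANA_state, invoked throughout the rest of the run, is just a pair of listener updates, one ANA state per port; a sketch reconstructed from the two rpc.py calls it emits here:

    # Reconstructed from the trace; usage: set_ANA_state <state_4420> <state_4421>
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The one-second sleep that follows each call gives the host's multipath layer time to observe the ANA change before the assertions run.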
00:33:15.068 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:15.068 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:16.443 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.701 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.959 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.959 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.960 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
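Each check_status round in the trace decodes into six port_status probes, in order: current for 4420 and 4421, connected for 4420 and 4421, accessible for 4420 and 4421. port_status itself is a bdev_nvme_get_io_paths query filtered through jq; a sketch consistent with the calls above:

    # Sketch; usage: port_status <trsvcid> <field> <expected>
    port_status() {
        local status
        status=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]    # any mismatch fails the test
    }
    port_status 4420 current true    # example: assert the 4420 path is active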
00:33:16.960 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.218 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.218 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:17.218 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.218 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:17.476 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.476 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:17.476 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:17.734 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:17.734 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.108 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.366 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.366 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.366 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.366 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.366 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.366 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.624 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.624 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.624 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.624 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.882 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.882 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.882 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.882 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:20.139 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.140 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:20.140 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:20.140 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:20.397 16:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:21.771 16:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:21.771 16:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:21.771 16:39:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.771 16:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.771 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.771 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:21.771 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.771 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.029 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.288 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.288 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:22.288 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.288 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.546 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.546 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:22.546 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.546 16:39:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.804 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.804 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:22.804 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:23.061 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:23.061 16:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:24.432 16:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.690 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.947 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.947 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:24.947 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.947 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:25.205 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:25.205 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:25.205 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.205 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:25.463 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:25.463 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:25.463 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:25.721 16:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:25.721 16:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:27.095 16:39:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:27.095 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.096 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:27.096 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.096 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:27.354 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.354 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:27.354 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.354 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:27.612 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.612 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:27.612 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.612 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.870 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.870 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:27.870 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.870 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:28.128 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.128 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:28.128 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:28.128 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:28.386 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:28.644 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:29.577 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:29.577 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:29.577 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.577 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:29.836 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.836 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:29.836 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.836 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:30.094 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.094 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:30.094 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.094 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:30.353 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.353 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:30.353 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.353 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:30.611 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.611 16:39:18 
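The run changes character at multipath_status.sh@116: up to that point the bdev ran under the default active_passive policy, where only one path can be current at a time; after the switch to active_active, the very next check_status (@121 above) asserts current=true on both 4420 and 4421. The switch is a single host-side RPC:

    # With active_active, every optimized path may carry I/O at once, so
    # both connections report current=true when both listeners are optimized.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active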
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:30.611 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.611 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:30.611 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.611 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:30.611 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.611 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:30.869 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.869 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:30.869 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:31.127 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:31.384 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:32.319 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:32.319 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:32.319 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.319 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.577 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.577 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.577 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.577 16:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.835 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.093 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.093 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.093 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.093 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.352 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.352 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.352 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.352 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.610 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.610 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:33.610 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:33.869 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:34.127 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
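Each check_status round asserts six booleans: the current, connected and accessible flags for the 4420 and 4421 paths. The jq filter implies the shape bdev_nvme_get_io_paths returns: poll_groups[], each holding io_paths[] whose entries carry a transport object plus the three flags. In the log, demoting 4420 to non_optimized moved "current" to the 4421 path while both paths stayed connected and accessible. Below is a hypothetical, abbreviated payload (shape inferred from the filter, flag values illustrative, not taken from this run) with the same select applied:

```bash
# Hypothetical bdev_nvme_get_io_paths payload; the shape is inferred from
# the jq filter in the trace, and the flag values are illustrative only.
cat > /tmp/io_paths.json <<'EOF'
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": false, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": true,  "connected": true, "accessible": true}
      ]
    }
  ]
}
EOF
# The same select the test runs, here asking for the 4421 path's flag:
jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").current' /tmp/io_paths.json
# prints: true
```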
00:33:35.061 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:35.061 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:35.061 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.061 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.319 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.319 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:35.319 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.319 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:35.577 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.577 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:35.577 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.577 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:35.577 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.577 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:35.577 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.577 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:35.835 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.835 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:35.835 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:35.835 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.093 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.093 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.093 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.093 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:36.351 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.351 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:36.351 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:36.610 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:36.610 16:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:37.985 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.243 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.243 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:38.243 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.243 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:38.501 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:38.501 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:38.501 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.501 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:38.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:38.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:38.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.759 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.759 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:38.759 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.759 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1159539 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159539 ']' 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159539 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:39.017 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159539 00:33:39.018 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:39.018 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:39.018 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159539' 00:33:39.018 killing process with pid 1159539 00:33:39.018 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159539 00:33:39.018 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159539 00:33:39.018 { 00:33:39.018 "results": [ 00:33:39.018 { 00:33:39.018 "job": "Nvme0n1", 
00:33:39.018 "core_mask": "0x4", 00:33:39.018 "workload": "verify", 00:33:39.018 "status": "terminated", 00:33:39.018 "verify_range": { 00:33:39.018 "start": 0, 00:33:39.018 "length": 16384 00:33:39.018 }, 00:33:39.018 "queue_depth": 128, 00:33:39.018 "io_size": 4096, 00:33:39.018 "runtime": 28.79857, 00:33:39.018 "iops": 10712.129109188407, 00:33:39.018 "mibps": 41.844254332767214, 00:33:39.018 "io_failed": 0, 00:33:39.018 "io_timeout": 0, 00:33:39.018 "avg_latency_us": 11929.396602789528, 00:33:39.018 "min_latency_us": 286.72, 00:33:39.018 "max_latency_us": 3019898.88 00:33:39.018 } 00:33:39.018 ], 00:33:39.018 "core_count": 1 00:33:39.018 } 00:33:39.278 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1159539 00:33:39.278 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:39.278 [2024-12-16 16:38:57.552724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:39.279 [2024-12-16 16:38:57.552771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159539 ] 00:33:39.279 [2024-12-16 16:38:57.625332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.279 [2024-12-16 16:38:57.647545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:39.279 Running I/O for 90 seconds... 00:33:39.279 11570.00 IOPS, 45.20 MiB/s [2024-12-16T15:39:27.888Z] 11525.00 IOPS, 45.02 MiB/s [2024-12-16T15:39:27.888Z] 11542.00 IOPS, 45.09 MiB/s [2024-12-16T15:39:27.888Z] 11531.25 IOPS, 45.04 MiB/s [2024-12-16T15:39:27.888Z] 11577.60 IOPS, 45.23 MiB/s [2024-12-16T15:39:27.888Z] 11540.50 IOPS, 45.08 MiB/s [2024-12-16T15:39:27.888Z] 11535.14 IOPS, 45.06 MiB/s [2024-12-16T15:39:27.888Z] 11532.75 IOPS, 45.05 MiB/s [2024-12-16T15:39:27.888Z] 11531.44 IOPS, 45.04 MiB/s [2024-12-16T15:39:27.888Z] 11527.40 IOPS, 45.03 MiB/s [2024-12-16T15:39:27.888Z] 11535.18 IOPS, 45.06 MiB/s [2024-12-16T15:39:27.888Z] 11531.92 IOPS, 45.05 MiB/s [2024-12-16T15:39:27.888Z] [2024-12-16 16:39:11.442359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 
[2024-12-16 16:39:11.442474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.442546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.442553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125536 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:33:39.279 [2024-12-16 16:39:11.443590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.279 [2024-12-16 16:39:11.443597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:39.279 [2024-12-16 16:39:11.443612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.443981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.443987] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.280 [2024-12-16 16:39:11.444249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.280 [2024-12-16 16:39:11.444544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:39.280 [2024-12-16 16:39:11.444560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444740] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 
sqhd:0016 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.444990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.444997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:39.281 [2024-12-16 16:39:11.445259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.281 [2024-12-16 16:39:11.445266] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
[2024-12-16 16:39:11.445284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-16 16:39:11.445291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... identical command/completion notice pairs elided: WRITE lba:126200-126320 and READ lba:125368-125424 on qid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path is inaccessible ...]
11287.38 IOPS, 44.09 MiB/s [2024-12-16T15:39:27.891Z]
10481.14 IOPS, 40.94 MiB/s [2024-12-16T15:39:27.891Z]
9782.40 IOPS, 38.21 MiB/s [2024-12-16T15:39:27.891Z]
9373.81 IOPS, 36.62 MiB/s [2024-12-16T15:39:27.891Z]
9498.76 IOPS, 37.10 MiB/s [2024-12-16T15:39:27.891Z]
9610.22 IOPS, 37.54 MiB/s [2024-12-16T15:39:27.891Z]
9793.00 IOPS, 38.25 MiB/s [2024-12-16T15:39:27.891Z]
9984.70 IOPS, 39.00 MiB/s [2024-12-16T15:39:27.891Z]
10144.33 IOPS, 39.63 MiB/s [2024-12-16T15:39:27.891Z]
10201.36 IOPS, 39.85 MiB/s [2024-12-16T15:39:27.891Z]
10257.48 IOPS, 40.07 MiB/s [2024-12-16T15:39:27.891Z]
10331.25 IOPS, 40.36 MiB/s [2024-12-16T15:39:27.891Z]
10466.32 IOPS, 40.88 MiB/s [2024-12-16T15:39:27.891Z]
10590.92 IOPS, 41.37 MiB/s [2024-12-16T15:39:27.891Z]
[2024-12-16 16:39:25.184754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-16 16:39:25.184790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0
[... second burst elided: READ/WRITE lba:20552-21048 on qid:1, same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status on every completion ...]
10657.44 IOPS, 41.63 MiB/s [2024-12-16T15:39:27.892Z]
10690.50 IOPS, 41.76 MiB/s [2024-12-16T15:39:27.892Z]
Received shutdown signal, test time was about 28.799215 seconds

                                                                                                Latency(us)
Device Information                                                        : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
        Verification LBA range: start 0x0 length 0x4000
        Nvme0n1 :      28.80    10712.13  41.84  0.00    0.00  11929.40  286.72  3019898.88
===================================================================================================================
        Total   :               10712.13  41.84  0.00    0.00  11929.40  286.72  3019898.88

-- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
-- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
-- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
-- host/multipath_status.sh@148 -- # nvmftestfini
-- nvmf/common.sh@516 -- # nvmfcleanup
-- nvmf/common.sh@121 -- # sync
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
-- nvmf/common.sh@124 -- # set +e
-- nvmf/common.sh@125 -- # for i in {1..20}
-- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
-- nvmf/common.sh@128 -- # set -e
-- nvmf/common.sh@129 -- # return 0
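For reference, the teardown nvmfcleanup just traced reduces to a handful of commands; a minimal sketch (rpc.py path taken from the trace, the {1..20} retry loop around modprobe simplified away):

# Sketch of the traced teardown: drop the subsystem over RPC, then unload
# the initiator-side kernel modules.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp        # rmmod output above shows nvme_fabrics/nvme_keyring go with it
modprobe -v -r nvme-fabrics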
-- nvmf/common.sh@517 -- # '[' -n 1159294 ']'
-- nvmf/common.sh@518 -- # killprocess 1159294
-- common/autotest_common.sh@954 -- # '[' -z 1159294 ']'
-- common/autotest_common.sh@958 -- # kill -0 1159294
-- common/autotest_common.sh@959 -- # uname
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159294
-- common/autotest_common.sh@960 -- # process_name=reactor_0
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159294'
killing process with pid 1159294
-- common/autotest_common.sh@973 -- # kill 1159294
-- common/autotest_common.sh@978 -- # wait 1159294
-- nvmf/common.sh@520 -- # '[' '' == iso ']'
-- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
-- nvmf/common.sh@524 -- # nvmf_tcp_fini
-- nvmf/common.sh@297 -- # iptr
-- nvmf/common.sh@791 -- # iptables-save
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
-- nvmf/common.sh@791 -- # iptables-restore
-- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
-- nvmf/common.sh@302 -- # remove_spdk_ns
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
-- common/autotest_common.sh@22 -- # _remove_spdk_ns
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1

real    0m40.329s
user    1m49.339s
sys     0m11.485s

-- common/autotest_common.sh@1130 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_host_multipath_status
************************************
-- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
-- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
-- common/autotest_common.sh@1111 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_discovery_remove_ifc
************************************
-- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
-- common/autotest_common.sh@1710 -- # [[ y == y ]]
-- common/autotest_common.sh@1711 -- # lcov --version
-- common/autotest_common.sh@1711 -- # awk '{print $NF}'
-- common/autotest_common.sh@1711 -- # lt 1.15 2
[... scripts/common.sh cmp_versions trace elided: both versions are split on IFS=.-: into ver1/ver2 arrays and compared field by field; 1 < 2, so "lt 1.15 2" returns 0 ...]
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
-- common/autotest_common.sh@1724 -- # export LCOV_OPTS='
	--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
	--rc genhtml_branch_coverage=1
	--rc genhtml_function_coverage=1
	--rc genhtml_legend=1
	--rc geninfo_all_blocks=1
	--rc geninfo_unexecuted_blocks=1
'
[... the same option block repeats for the LCOV_OPTS assignment and for export/assignment of LCOV='lcov <same options>'; duplicates elided ...]
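The lt/cmp_versions trace above amounts to a field-wise numeric comparison; a minimal reconstruction under that assumption (not copied from scripts/common.sh, behavior inferred from the trace):

# Split both versions on '.', '-' and ':' and compare field by field;
# missing fields count as 0, equal versions are not strictly '<'.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    local i
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"     # $2 carries the operator, here always '<'
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo 'lcov is pre-2.x'   # matches the trace: returns 0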
-- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
-- nvmf/common.sh@7 -- # uname -s
-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
-- nvmf/common.sh@9 -- # NVMF_PORT=4420
-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
-- nvmf/common.sh@17 -- # nvme gen-hostnqn
-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
-- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
-- nvmf/common.sh@21 -- # NET_TYPE=phy
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
-- scripts/common.sh@15 -- # shopt -s extglob
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
-- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
-- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 elided: PATH is repeatedly prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, exported and echoed; nested sourcing leaves the same components duplicated many times over ...]
-- nvmf/common.sh@51 -- # : 0
-- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
-- nvmf/common.sh@53 -- # build_nvmf_app_args
-- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
-- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
-- nvmf/common.sh@37 -- # '[' -n '' ']'
-- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
-- nvmf/common.sh@55 -- # have_pci_nics=0
-- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
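The "[: : integer expression expected" complaint above is a plain shell pitfall: `[` refuses `-eq` against an empty string, which is what common.sh line 33 hits when a test flag is unset. A hypothetical minimal reproduction (the variable name is invented) and the usual guard:

# Repro of the message and its conventional fix.
FLAG=
[ "$FLAG" -eq 1 ]            # -> [: : integer expression expected (exit status 2)
[ "${FLAG:-0}" -eq 1 ]       # defaulting to 0 keeps the numeric test well-formed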
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
-- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
-- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
-- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
-- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
-- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
-- nvmf/common.sh@469 -- # '[' -z tcp ']'
-- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
-- nvmf/common.sh@476 -- # prepare_net_devs
-- nvmf/common.sh@438 -- # local -g is_hw=no
-- nvmf/common.sh@440 -- # remove_spdk_ns
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
-- common/autotest_common.sh@22 -- # _remove_spdk_ns
-- nvmf/common.sh@442 -- # [[ phy != virt ]]
-- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
-- nvmf/common.sh@309 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
-- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... array setup elided: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx declared; e810 seeded with Intel 0x1592/0x159b, x722 with 0x37d2, mlx with the Mellanox 0xa2dc/0x1021/0xa2d6/0x101d/0x101b/0x1017/0x1019/0x1015/0x1013 IDs; pci_devs set to the e810 list since the transport is tcp, not rdma ...]
-- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
-- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
[... per-device checks elided: driver ice is neither unknown nor unbound, device ID 0x159b is neither 0x1017 nor 0x1019, transport is not rdma ...]
-- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
-- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
-- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
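The pci_net_devs glob above is the whole PCI-to-netdev mapping: each NIC's kernel interface name is read straight out of sysfs. The equivalent interactive check, with the PCI addresses from this run:

ls /sys/bus/pci/devices/0000:af:00.0/net/    # -> cvl_0_0
ls /sys/bus/pci/devices/0000:af:00.1/net/    # -> cvl_0_1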
-- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
-- nvmf/common.sh@432 -- # (( 2 == 0 ))
-- nvmf/common.sh@442 -- # is_hw=yes
-- nvmf/common.sh@444 -- # [[ yes == yes ]]
-- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
-- nvmf/common.sh@446 -- # nvmf_tcp_init
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
-- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
-- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
-- nvmf/common.sh@256 -- # (( 2 > 1 ))
-- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
-- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
-- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
-- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
-- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
-- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
-- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
-- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
-- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
-- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
-- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
-- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
-- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
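Condensed, the nvmf_tcp_init sequence just traced is the standard one-box NVMe/TCP topology: the target-side port moves into its own network namespace while the initiator port stays in the root namespace, so the two ends of the same machine talk over a real link. A sketch with the values from this run:

# Target/initiator split as performed by the trace above (interface names
# cvl_0_0/cvl_0_1 come from this machine; adapt to your NICs).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (inside ns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow in the log verify both directions of that link before any NVMe traffic is attempted.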
-- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
-- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
-- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
-- nvmf/common.sh@450 -- # return 0
-- nvmf/common.sh@478 -- # '[' '' == iso ']'
-- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
-- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
-- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
-- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
-- nvmf/common.sh@502 -- # modprobe nvme-tcp
-- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
-- common/autotest_common.sh@726 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
-- nvmf/common.sh@509 -- # nvmfpid=1168517
-- nvmf/common.sh@510 -- # waitforlisten 1168517
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
-- common/autotest_common.sh@835 -- # '[' -z 1168517 ']'
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
-- common/autotest_common.sh@840 -- # local max_retries=100
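waitforlisten, traced around this point, polls the new process's RPC socket until it answers. A sketch of the launch-and-wait pattern (the launch command is verbatim from the trace; the polling loop is assumed, rpc_get_methods being a standard SPDK RPC):

# Start the target inside the namespace, then wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.5
done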
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
-- common/autotest_common.sh@844 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
[2024-12-16 16:39:36.446251] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
[2024-12-16 16:39:36.446305] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-16 16:39:36.525630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 16:39:36.546466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-16 16:39:36.546503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-16 16:39:36.546510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-16 16:39:36.546515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-16 16:39:36.546520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-16 16:39:36.547012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
-- common/autotest_common.sh@864 -- # (( i == 0 ))
-- common/autotest_common.sh@868 -- # return 0
-- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
-- common/autotest_common.sh@732 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
-- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
-- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
-- common/autotest_common.sh@563 -- # xtrace_disable
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-16 16:39:36.684911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-12-16 16:39:36.693061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
null0
[2024-12-16 16:39:36.725063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
-- host/discovery_remove_ifc.sh@59 -- # hostpid=1168638
-- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
-- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1168638 /tmp/host.sock
-- common/autotest_common.sh@835 -- # '[' -z 1168638 ']'
-- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
-- common/autotest_common.sh@840 -- # local max_retries=100
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
-- common/autotest_common.sh@844 -- # xtrace_disable
-- common/autotest_common.sh@10 -- # set +x
[2024-12-16 16:39:36.794686] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
[2024-12-16 16:39:36.794725] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168638 ]
[2024-12-16 16:39:36.869717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 16:39:36.892453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
-- common/autotest_common.sh@864 -- # (( i == 0 ))
-- common/autotest_common.sh@868 -- # return 0
-- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
-- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
-- common/autotest_common.sh@563 -- # xtrace_disable
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
-- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
-- common/autotest_common.sh@563 -- # xtrace_disable
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
-- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
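Because the host app was started with --wait-for-rpc, everything after this point is driven over /tmp/host.sock. The traced sequence, restated as plain rpc.py calls (flags copied verbatim from the trace; run from the SPDK repo root):

RPC='scripts/rpc.py -s /tmp/host.sock'
$RPC bdev_nvme_set_options -e 1           # as traced above
$RPC framework_start_init                 # completes the init deferred by --wait-for-rpc
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach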
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.666 16:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.600 [2024-12-16 16:39:38.081285] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:49.600 [2024-12-16 16:39:38.081307] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:49.600 [2024-12-16 16:39:38.081323] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:49.600 [2024-12-16 16:39:38.167571] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:49.859 [2024-12-16 16:39:38.342581] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:49.859 [2024-12-16 16:39:38.343230] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x918710:1 started. 00:33:49.859 [2024-12-16 16:39:38.344517] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:49.859 [2024-12-16 16:39:38.344556] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:49.859 [2024-12-16 16:39:38.344574] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:49.859 [2024-12-16 16:39:38.344585] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:49.859 [2024-12-16 16:39:38.344601] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:49.859 [2024-12-16 16:39:38.350414] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x918710 was disconnected and freed. delete nvme_qpair. 
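The get_bdev_list call traced here expands below into an rpc_cmd/jq pipeline against the host app's RPC socket, and wait_for_bdev re-polls it once per second. A minimal sketch of the two helpers, reconstructed from this xtrace (the pipeline and the sleep are exactly as traced; the function framing and the retry bound are assumptions, since only the expanded commands appear in the log):

    # Flatten the host app's current bdev names to a single sorted line.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected string
    # ('' means "no bdevs left"). The 30-try bound is an assumed safeguard.
    wait_for_bdev() {
        local expected=$1 i
        for ((i = 0; i < 30; i++)); do
            [[ $(get_bdev_list) == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }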
00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:49.859 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:50.117 16:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.053 16:39:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:51.053 16:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:52.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:53.364 16:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.298 16:39:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:54.298 16:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.233 [2024-12-16 16:39:43.786276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:55.233 [2024-12-16 16:39:43.786315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.233 [2024-12-16 16:39:43.786326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.233 [2024-12-16 16:39:43.786335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.233 [2024-12-16 16:39:43.786342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.233 [2024-12-16 16:39:43.786349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.233 [2024-12-16 16:39:43.786356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.233 [2024-12-16 16:39:43.786362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.233 [2024-12-16 16:39:43.786369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.233 [2024-12-16 16:39:43.786376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.233 [2024-12-16 16:39:43.786383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.233 [2024-12-16 16:39:43.786389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f4ec0 is same with the state(6) to be set 00:33:55.233 [2024-12-16 
16:39:43.796297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4ec0 (9): Bad file descriptor 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:55.233 16:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:55.233 [2024-12-16 16:39:43.806333] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:55.233 [2024-12-16 16:39:43.806346] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:55.233 [2024-12-16 16:39:43.806352] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:55.233 [2024-12-16 16:39:43.806356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:55.233 [2024-12-16 16:39:43.806377] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:56.609 [2024-12-16 16:39:44.844143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:56.609 [2024-12-16 16:39:44.844221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f4ec0 with addr=10.0.0.2, port=4420 00:33:56.609 [2024-12-16 16:39:44.844253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f4ec0 is same with the state(6) to be set 00:33:56.609 [2024-12-16 16:39:44.844302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4ec0 (9): Bad file descriptor 00:33:56.609 [2024-12-16 16:39:44.845243] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:56.609 [2024-12-16 16:39:44.845307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:56.609 [2024-12-16 16:39:44.845331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:56.609 [2024-12-16 16:39:44.845355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:56.609 [2024-12-16 16:39:44.845376] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:56.609 [2024-12-16 16:39:44.845391] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
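The connect() failure above (errno 110; the address was deleted from cvl_0_0) kicks off the reconnect cycle whose cadence was fixed when discovery was started: retry every reconnect-delay-sec, fail I/O after fast-io-fail-timeout-sec, and delete the controller (and its bdev) once ctrlr-loss-timeout-sec expires. For reference, the invocation traced at the top of this test, re-wrapped:

    # Attach via discovery with deliberately short failure timeouts so the
    # removed interface is detected within a couple of seconds.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach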
00:33:56.609 [2024-12-16 16:39:44.845404] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:56.609 [2024-12-16 16:39:44.845425] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:56.609 [2024-12-16 16:39:44.845440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:56.609 16:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:57.544 [2024-12-16 16:39:45.847948] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.544 [2024-12-16 16:39:45.847968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.544 [2024-12-16 16:39:45.847982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.544 [2024-12-16 16:39:45.847988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.544 [2024-12-16 16:39:45.847995] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:57.544 [2024-12-16 16:39:45.848002] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.544 [2024-12-16 16:39:45.848008] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.544 [2024-12-16 16:39:45.848013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
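Each retry repeats the same sequence: reconnect starts, connect() times out, the controller re-enters the failed state, pending resets are cleared, and the poller arms the next attempt. While this loop runs, the path state could be inspected over the same socket; a hedged example (bdev_nvme_get_controllers is a standard SPDK RPC, but this call is not part of the traced script):

    # Not in the traced script: dump controller/path state during the outage.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .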
00:33:57.544 [2024-12-16 16:39:45.848035] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:57.544 [2024-12-16 16:39:45.848053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.544 [2024-12-16 16:39:45.848061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.544 [2024-12-16 16:39:45.848070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.544 [2024-12-16 16:39:45.848077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.544 [2024-12-16 16:39:45.848084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.544 [2024-12-16 16:39:45.848090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.544 [2024-12-16 16:39:45.848102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.544 [2024-12-16 16:39:45.848108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.544 [2024-12-16 16:39:45.848115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.544 [2024-12-16 16:39:45.848121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.544 [2024-12-16 16:39:45.848128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
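With the discovery entry removed and both controllers in the failed state, the 2-second ctrlr-loss timeout expires and nvme0n1 is deleted: the get_bdev_list poll below returns an empty string, which satisfies the earlier wait_for_bdev ''. The test then restores the path and waits for the namespace to reappear under a new name, as traced below (discovery_remove_ifc.sh@82-@86):

    # Re-add the target address inside the namespace, bring the link up,
    # and wait for the discovery service to re-attach as nvme1n1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1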
00:33:57.544 [2024-12-16 16:39:45.848474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e45e0 (9): Bad file descriptor 00:33:57.544 [2024-12-16 16:39:45.849485] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:57.544 [2024-12-16 16:39:45.849495] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.544 16:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.544 16:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.544 16:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:57.544 16:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.480 16:39:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:58.480 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.738 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:58.739 16:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:59.304 [2024-12-16 16:39:47.906620] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:59.304 [2024-12-16 16:39:47.906636] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:59.304 [2024-12-16 16:39:47.906649] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:59.563 [2024-12-16 16:39:48.033025] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:59.563 [2024-12-16 16:39:48.087563] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:59.563 [2024-12-16 16:39:48.088166] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x8f5260:1 started. 
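Once the link is back, the still-running discovery poller re-reads the log page, finds nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 again, and attaches a second controller instance (the ", 2]" in the messages above), whose namespace surfaces as nvme1n1. To confirm the discovery view independently, the discovery state can be queried over the same socket; a hedged example, not part of the traced script:

    # Not in the traced script: show what the discovery service currently
    # sees (bdev_nvme_get_discovery_info is a standard SPDK RPC).
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq .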
00:33:59.563 [2024-12-16 16:39:48.089186] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:59.563 [2024-12-16 16:39:48.089216] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:59.563 [2024-12-16 16:39:48.089232] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:59.563 [2024-12-16 16:39:48.089244] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:59.563 [2024-12-16 16:39:48.089253] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:59.563 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:59.563 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.563 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.564 [2024-12-16 16:39:48.135652] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x8f5260 was disconnected and freed. delete nvme_qpair. 
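The nvme1n1 check below passes on the first poll, so the test clears its trap and tears down: killprocess stops the host app (pid 1168638), then nvmftestfini kills the target (pid 1168517), unloads the nvme-tcp modules, restores iptables, and removes the network namespace. A sketch of killprocess as the trace expands it (the guards, the echo, and the kill/wait pair all appear in the xtrace; only the function framing is assumed):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1               # is it still running?
        if [ "$(uname)" = Linux ]; then
            # Guard against pid reuse: never kill a process named sudo.
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }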
00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1168638 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168638 ']' 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168638 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.564 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168638 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168638' 00:33:59.823 killing process with pid 1168638 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168638 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168638 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.823 rmmod nvme_tcp 00:33:59.823 rmmod nvme_fabrics 00:33:59.823 rmmod nvme_keyring 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1168517 ']' 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1168517 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168517 ']' 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168517 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.823 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168517 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168517' 00:34:00.082 killing process with pid 1168517 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168517 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168517 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.082 16:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:02.618 00:34:02.618 real 0m20.327s 00:34:02.618 user 0m24.619s 00:34:02.618 sys 0m5.756s 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:02.618 ************************************ 00:34:02.618 END TEST nvmf_discovery_remove_ifc 00:34:02.618 ************************************ 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.618 ************************************ 00:34:02.618 START TEST nvmf_identify_kernel_target 00:34:02.618 ************************************ 00:34:02.618 16:39:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:02.618 * Looking for test storage... 00:34:02.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.618 --rc genhtml_branch_coverage=1 00:34:02.618 --rc genhtml_function_coverage=1 00:34:02.618 --rc genhtml_legend=1 00:34:02.618 --rc geninfo_all_blocks=1 00:34:02.618 --rc geninfo_unexecuted_blocks=1 00:34:02.618 00:34:02.618 ' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.618 --rc genhtml_branch_coverage=1 00:34:02.618 --rc genhtml_function_coverage=1 00:34:02.618 --rc genhtml_legend=1 00:34:02.618 --rc geninfo_all_blocks=1 00:34:02.618 --rc geninfo_unexecuted_blocks=1 00:34:02.618 00:34:02.618 ' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.618 --rc genhtml_branch_coverage=1 00:34:02.618 --rc genhtml_function_coverage=1 00:34:02.618 --rc genhtml_legend=1 00:34:02.618 --rc geninfo_all_blocks=1 00:34:02.618 --rc geninfo_unexecuted_blocks=1 00:34:02.618 00:34:02.618 ' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.618 --rc genhtml_branch_coverage=1 00:34:02.618 --rc genhtml_function_coverage=1 00:34:02.618 --rc genhtml_legend=1 00:34:02.618 --rc geninfo_all_blocks=1 00:34:02.618 --rc geninfo_unexecuted_blocks=1 00:34:02.618 00:34:02.618 ' 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:02.618 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:02.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:02.619 16:39:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.188 16:39:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:09.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:09.188 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:09.188 Found net devices under 0000:af:00.0: cvl_0_0 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:09.189 Found net devices under 0000:af:00.1: cvl_0_1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:34:09.189 00:34:09.189 --- 10.0.0.2 ping statistics --- 00:34:09.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.189 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:09.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:34:09.189 00:34:09.189 --- 10.0.0.1 ping statistics --- 00:34:09.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.189 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.189 16:39:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:09.189 16:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:11.093 Waiting for block devices as requested 00:34:11.093 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:11.352 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.352 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.352 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.352 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.611 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:11.611 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:11.611 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:11.869 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:11.869 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.869 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.869 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:12.128 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:12.128 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.128 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:12.386 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:12.386 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
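[Note] The nvmf_tcp_init sequence traced above isolates one of the two E810 ports in its own network namespace so that target and initiator traffic crosses a real TCP link on a single host. A minimal sketch of the same setup, using the cvl_0_0/cvl_0_1 interface names and addresses from this run:

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # verify reachability in both directions, as the pings above do
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1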
00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:12.386 No valid GPT data, bailing 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:12.386 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:12.646 16:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:12.646 00:34:12.646 Discovery Log Number of Records 2, Generation counter 2 00:34:12.646 =====Discovery Log Entry 0====== 00:34:12.646 trtype: tcp 00:34:12.646 adrfam: ipv4 00:34:12.646 subtype: current discovery subsystem 00:34:12.646 treq: not specified, sq flow control disable supported 00:34:12.646 portid: 1 00:34:12.646 trsvcid: 4420 00:34:12.646 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:12.646 traddr: 10.0.0.1 00:34:12.646 eflags: none 00:34:12.646 sectype: none 00:34:12.646 =====Discovery Log Entry 1====== 00:34:12.646 trtype: tcp 00:34:12.646 adrfam: ipv4 00:34:12.646 subtype: nvme subsystem 00:34:12.646 treq: not specified, sq flow control disable 
supported 00:34:12.646 portid: 1 00:34:12.646 trsvcid: 4420 00:34:12.646 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:12.646 traddr: 10.0.0.1 00:34:12.646 eflags: none 00:34:12.646 sectype: none 00:34:12.646 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:12.646 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:12.646 ===================================================== 00:34:12.646 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:12.646 ===================================================== 00:34:12.646 Controller Capabilities/Features 00:34:12.646 ================================ 00:34:12.646 Vendor ID: 0000 00:34:12.646 Subsystem Vendor ID: 0000 00:34:12.646 Serial Number: 85a682a1eb53fff0fdac 00:34:12.646 Model Number: Linux 00:34:12.646 Firmware Version: 6.8.9-20 00:34:12.646 Recommended Arb Burst: 0 00:34:12.646 IEEE OUI Identifier: 00 00 00 00:34:12.646 Multi-path I/O 00:34:12.646 May have multiple subsystem ports: No 00:34:12.646 May have multiple controllers: No 00:34:12.646 Associated with SR-IOV VF: No 00:34:12.646 Max Data Transfer Size: Unlimited 00:34:12.646 Max Number of Namespaces: 0 00:34:12.646 Max Number of I/O Queues: 1024 00:34:12.646 NVMe Specification Version (VS): 1.3 00:34:12.646 NVMe Specification Version (Identify): 1.3 00:34:12.646 Maximum Queue Entries: 1024 00:34:12.646 Contiguous Queues Required: No 00:34:12.646 Arbitration Mechanisms Supported 00:34:12.646 Weighted Round Robin: Not Supported 00:34:12.646 Vendor Specific: Not Supported 00:34:12.646 Reset Timeout: 7500 ms 00:34:12.646 Doorbell Stride: 4 bytes 00:34:12.646 NVM Subsystem Reset: Not Supported 00:34:12.646 Command Sets Supported 00:34:12.646 NVM Command Set: Supported 00:34:12.646 Boot Partition: Not Supported 00:34:12.646 Memory Page Size Minimum: 4096 bytes 00:34:12.646 Memory Page Size Maximum: 4096 bytes 00:34:12.646 Persistent Memory Region: Not Supported 00:34:12.646 Optional Asynchronous Events Supported 00:34:12.646 Namespace Attribute Notices: Not Supported 00:34:12.646 Firmware Activation Notices: Not Supported 00:34:12.646 ANA Change Notices: Not Supported 00:34:12.646 PLE Aggregate Log Change Notices: Not Supported 00:34:12.646 LBA Status Info Alert Notices: Not Supported 00:34:12.646 EGE Aggregate Log Change Notices: Not Supported 00:34:12.646 Normal NVM Subsystem Shutdown event: Not Supported 00:34:12.646 Zone Descriptor Change Notices: Not Supported 00:34:12.646 Discovery Log Change Notices: Supported 00:34:12.646 Controller Attributes 00:34:12.646 128-bit Host Identifier: Not Supported 00:34:12.646 Non-Operational Permissive Mode: Not Supported 00:34:12.646 NVM Sets: Not Supported 00:34:12.646 Read Recovery Levels: Not Supported 00:34:12.646 Endurance Groups: Not Supported 00:34:12.646 Predictable Latency Mode: Not Supported 00:34:12.646 Traffic Based Keep ALive: Not Supported 00:34:12.646 Namespace Granularity: Not Supported 00:34:12.646 SQ Associations: Not Supported 00:34:12.646 UUID List: Not Supported 00:34:12.646 Multi-Domain Subsystem: Not Supported 00:34:12.646 Fixed Capacity Management: Not Supported 00:34:12.646 Variable Capacity Management: Not Supported 00:34:12.646 Delete Endurance Group: Not Supported 00:34:12.646 Delete NVM Set: Not Supported 00:34:12.646 Extended LBA Formats Supported: Not Supported 00:34:12.646 Flexible Data Placement 
Supported: Not Supported 00:34:12.646 00:34:12.646 Controller Memory Buffer Support 00:34:12.646 ================================ 00:34:12.646 Supported: No 00:34:12.646 00:34:12.646 Persistent Memory Region Support 00:34:12.646 ================================ 00:34:12.646 Supported: No 00:34:12.646 00:34:12.646 Admin Command Set Attributes 00:34:12.646 ============================ 00:34:12.646 Security Send/Receive: Not Supported 00:34:12.646 Format NVM: Not Supported 00:34:12.646 Firmware Activate/Download: Not Supported 00:34:12.646 Namespace Management: Not Supported 00:34:12.646 Device Self-Test: Not Supported 00:34:12.646 Directives: Not Supported 00:34:12.646 NVMe-MI: Not Supported 00:34:12.646 Virtualization Management: Not Supported 00:34:12.646 Doorbell Buffer Config: Not Supported 00:34:12.646 Get LBA Status Capability: Not Supported 00:34:12.646 Command & Feature Lockdown Capability: Not Supported 00:34:12.646 Abort Command Limit: 1 00:34:12.646 Async Event Request Limit: 1 00:34:12.646 Number of Firmware Slots: N/A 00:34:12.646 Firmware Slot 1 Read-Only: N/A 00:34:12.646 Firmware Activation Without Reset: N/A 00:34:12.646 Multiple Update Detection Support: N/A 00:34:12.646 Firmware Update Granularity: No Information Provided 00:34:12.646 Per-Namespace SMART Log: No 00:34:12.646 Asymmetric Namespace Access Log Page: Not Supported 00:34:12.646 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:12.646 Command Effects Log Page: Not Supported 00:34:12.646 Get Log Page Extended Data: Supported 00:34:12.646 Telemetry Log Pages: Not Supported 00:34:12.646 Persistent Event Log Pages: Not Supported 00:34:12.646 Supported Log Pages Log Page: May Support 00:34:12.646 Commands Supported & Effects Log Page: Not Supported 00:34:12.646 Feature Identifiers & Effects Log Page:May Support 00:34:12.646 NVMe-MI Commands & Effects Log Page: May Support 00:34:12.646 Data Area 4 for Telemetry Log: Not Supported 00:34:12.646 Error Log Page Entries Supported: 1 00:34:12.646 Keep Alive: Not Supported 00:34:12.646 00:34:12.646 NVM Command Set Attributes 00:34:12.646 ========================== 00:34:12.646 Submission Queue Entry Size 00:34:12.646 Max: 1 00:34:12.646 Min: 1 00:34:12.646 Completion Queue Entry Size 00:34:12.646 Max: 1 00:34:12.646 Min: 1 00:34:12.646 Number of Namespaces: 0 00:34:12.646 Compare Command: Not Supported 00:34:12.646 Write Uncorrectable Command: Not Supported 00:34:12.646 Dataset Management Command: Not Supported 00:34:12.646 Write Zeroes Command: Not Supported 00:34:12.646 Set Features Save Field: Not Supported 00:34:12.646 Reservations: Not Supported 00:34:12.646 Timestamp: Not Supported 00:34:12.646 Copy: Not Supported 00:34:12.646 Volatile Write Cache: Not Present 00:34:12.646 Atomic Write Unit (Normal): 1 00:34:12.646 Atomic Write Unit (PFail): 1 00:34:12.646 Atomic Compare & Write Unit: 1 00:34:12.646 Fused Compare & Write: Not Supported 00:34:12.646 Scatter-Gather List 00:34:12.646 SGL Command Set: Supported 00:34:12.646 SGL Keyed: Not Supported 00:34:12.646 SGL Bit Bucket Descriptor: Not Supported 00:34:12.646 SGL Metadata Pointer: Not Supported 00:34:12.646 Oversized SGL: Not Supported 00:34:12.646 SGL Metadata Address: Not Supported 00:34:12.646 SGL Offset: Supported 00:34:12.646 Transport SGL Data Block: Not Supported 00:34:12.646 Replay Protected Memory Block: Not Supported 00:34:12.646 00:34:12.646 Firmware Slot Information 00:34:12.646 ========================= 00:34:12.646 Active slot: 0 00:34:12.646 00:34:12.646 00:34:12.646 Error Log 00:34:12.646 
========= 00:34:12.646 00:34:12.647 Active Namespaces 00:34:12.647 ================= 00:34:12.647 Discovery Log Page 00:34:12.647 ================== 00:34:12.647 Generation Counter: 2 00:34:12.647 Number of Records: 2 00:34:12.647 Record Format: 0 00:34:12.647 00:34:12.647 Discovery Log Entry 0 00:34:12.647 ---------------------- 00:34:12.647 Transport Type: 3 (TCP) 00:34:12.647 Address Family: 1 (IPv4) 00:34:12.647 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:12.647 Entry Flags: 00:34:12.647 Duplicate Returned Information: 0 00:34:12.647 Explicit Persistent Connection Support for Discovery: 0 00:34:12.647 Transport Requirements: 00:34:12.647 Secure Channel: Not Specified 00:34:12.647 Port ID: 1 (0x0001) 00:34:12.647 Controller ID: 65535 (0xffff) 00:34:12.647 Admin Max SQ Size: 32 00:34:12.647 Transport Service Identifier: 4420 00:34:12.647 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:12.647 Transport Address: 10.0.0.1 00:34:12.647 Discovery Log Entry 1 00:34:12.647 ---------------------- 00:34:12.647 Transport Type: 3 (TCP) 00:34:12.647 Address Family: 1 (IPv4) 00:34:12.647 Subsystem Type: 2 (NVM Subsystem) 00:34:12.647 Entry Flags: 00:34:12.647 Duplicate Returned Information: 0 00:34:12.647 Explicit Persistent Connection Support for Discovery: 0 00:34:12.647 Transport Requirements: 00:34:12.647 Secure Channel: Not Specified 00:34:12.647 Port ID: 1 (0x0001) 00:34:12.647 Controller ID: 65535 (0xffff) 00:34:12.647 Admin Max SQ Size: 32 00:34:12.647 Transport Service Identifier: 4420 00:34:12.647 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:12.647 Transport Address: 10.0.0.1 00:34:12.647 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:12.906 get_feature(0x01) failed 00:34:12.906 get_feature(0x02) failed 00:34:12.906 get_feature(0x04) failed 00:34:12.906 ===================================================== 00:34:12.906 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:12.906 ===================================================== 00:34:12.906 Controller Capabilities/Features 00:34:12.906 ================================ 00:34:12.906 Vendor ID: 0000 00:34:12.906 Subsystem Vendor ID: 0000 00:34:12.906 Serial Number: 6d82fda0e9853238081c 00:34:12.906 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:12.906 Firmware Version: 6.8.9-20 00:34:12.906 Recommended Arb Burst: 6 00:34:12.907 IEEE OUI Identifier: 00 00 00 00:34:12.907 Multi-path I/O 00:34:12.907 May have multiple subsystem ports: Yes 00:34:12.907 May have multiple controllers: Yes 00:34:12.907 Associated with SR-IOV VF: No 00:34:12.907 Max Data Transfer Size: Unlimited 00:34:12.907 Max Number of Namespaces: 1024 00:34:12.907 Max Number of I/O Queues: 128 00:34:12.907 NVMe Specification Version (VS): 1.3 00:34:12.907 NVMe Specification Version (Identify): 1.3 00:34:12.907 Maximum Queue Entries: 1024 00:34:12.907 Contiguous Queues Required: No 00:34:12.907 Arbitration Mechanisms Supported 00:34:12.907 Weighted Round Robin: Not Supported 00:34:12.907 Vendor Specific: Not Supported 00:34:12.907 Reset Timeout: 7500 ms 00:34:12.907 Doorbell Stride: 4 bytes 00:34:12.907 NVM Subsystem Reset: Not Supported 00:34:12.907 Command Sets Supported 00:34:12.907 NVM Command Set: Supported 00:34:12.907 Boot Partition: Not Supported 00:34:12.907 
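[Note] The configfs writes at nvmf/common.sh@686-705 earlier in the trace build the kernel (nvmet) target that this identify output describes. xtrace does not record redirection targets, so the attribute paths below are filled in with the standard nvmet configfs names the echoes most plausibly correspond to; treat this as an annotated reconstruction under that assumption, not a verbatim copy of common.sh:

    modprobe nvmet              # nvmet_tcp is pulled in automatically for the tcp port
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # assumed target of @693
    echo 1            > "$subsys/attr_allow_any_host"             # assumed target of @695
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Once the symlink lands, the listener is live, which is why the discovery log above reports two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.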
Memory Page Size Minimum: 4096 bytes 00:34:12.907 Memory Page Size Maximum: 4096 bytes 00:34:12.907 Persistent Memory Region: Not Supported 00:34:12.907 Optional Asynchronous Events Supported 00:34:12.907 Namespace Attribute Notices: Supported 00:34:12.907 Firmware Activation Notices: Not Supported 00:34:12.907 ANA Change Notices: Supported 00:34:12.907 PLE Aggregate Log Change Notices: Not Supported 00:34:12.907 LBA Status Info Alert Notices: Not Supported 00:34:12.907 EGE Aggregate Log Change Notices: Not Supported 00:34:12.907 Normal NVM Subsystem Shutdown event: Not Supported 00:34:12.907 Zone Descriptor Change Notices: Not Supported 00:34:12.907 Discovery Log Change Notices: Not Supported 00:34:12.907 Controller Attributes 00:34:12.907 128-bit Host Identifier: Supported 00:34:12.907 Non-Operational Permissive Mode: Not Supported 00:34:12.907 NVM Sets: Not Supported 00:34:12.907 Read Recovery Levels: Not Supported 00:34:12.907 Endurance Groups: Not Supported 00:34:12.907 Predictable Latency Mode: Not Supported 00:34:12.907 Traffic Based Keep ALive: Supported 00:34:12.907 Namespace Granularity: Not Supported 00:34:12.907 SQ Associations: Not Supported 00:34:12.907 UUID List: Not Supported 00:34:12.907 Multi-Domain Subsystem: Not Supported 00:34:12.907 Fixed Capacity Management: Not Supported 00:34:12.907 Variable Capacity Management: Not Supported 00:34:12.907 Delete Endurance Group: Not Supported 00:34:12.907 Delete NVM Set: Not Supported 00:34:12.907 Extended LBA Formats Supported: Not Supported 00:34:12.907 Flexible Data Placement Supported: Not Supported 00:34:12.907 00:34:12.907 Controller Memory Buffer Support 00:34:12.907 ================================ 00:34:12.907 Supported: No 00:34:12.907 00:34:12.907 Persistent Memory Region Support 00:34:12.907 ================================ 00:34:12.907 Supported: No 00:34:12.907 00:34:12.907 Admin Command Set Attributes 00:34:12.907 ============================ 00:34:12.907 Security Send/Receive: Not Supported 00:34:12.907 Format NVM: Not Supported 00:34:12.907 Firmware Activate/Download: Not Supported 00:34:12.907 Namespace Management: Not Supported 00:34:12.907 Device Self-Test: Not Supported 00:34:12.907 Directives: Not Supported 00:34:12.907 NVMe-MI: Not Supported 00:34:12.907 Virtualization Management: Not Supported 00:34:12.907 Doorbell Buffer Config: Not Supported 00:34:12.907 Get LBA Status Capability: Not Supported 00:34:12.907 Command & Feature Lockdown Capability: Not Supported 00:34:12.907 Abort Command Limit: 4 00:34:12.907 Async Event Request Limit: 4 00:34:12.907 Number of Firmware Slots: N/A 00:34:12.907 Firmware Slot 1 Read-Only: N/A 00:34:12.907 Firmware Activation Without Reset: N/A 00:34:12.907 Multiple Update Detection Support: N/A 00:34:12.907 Firmware Update Granularity: No Information Provided 00:34:12.907 Per-Namespace SMART Log: Yes 00:34:12.907 Asymmetric Namespace Access Log Page: Supported 00:34:12.907 ANA Transition Time : 10 sec 00:34:12.907 00:34:12.907 Asymmetric Namespace Access Capabilities 00:34:12.907 ANA Optimized State : Supported 00:34:12.907 ANA Non-Optimized State : Supported 00:34:12.907 ANA Inaccessible State : Supported 00:34:12.907 ANA Persistent Loss State : Supported 00:34:12.907 ANA Change State : Supported 00:34:12.907 ANAGRPID is not changed : No 00:34:12.907 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:12.907 00:34:12.907 ANA Group Identifier Maximum : 128 00:34:12.907 Number of ANA Group Identifiers : 128 00:34:12.907 Max Number of Allowed Namespaces : 1024 00:34:12.907 
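[Note] The ANA block above marks this as the kernel target: it reports Asymmetric Namespace Access supported with a single group, and the namespace later in the dump carries ANA group ID 1 in the optimized state. After connecting as an initiator, the same log page can be pulled with nvme-cli; the device node below is a hypothetical placeholder for whatever controller the connect creates (check nvme list):

    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme ana-log /dev/nvme1    # assumed device node for the new fabrics controller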
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:12.907 Command Effects Log Page: Supported 00:34:12.907 Get Log Page Extended Data: Supported 00:34:12.907 Telemetry Log Pages: Not Supported 00:34:12.907 Persistent Event Log Pages: Not Supported 00:34:12.907 Supported Log Pages Log Page: May Support 00:34:12.907 Commands Supported & Effects Log Page: Not Supported 00:34:12.907 Feature Identifiers & Effects Log Page:May Support 00:34:12.907 NVMe-MI Commands & Effects Log Page: May Support 00:34:12.907 Data Area 4 for Telemetry Log: Not Supported 00:34:12.907 Error Log Page Entries Supported: 128 00:34:12.907 Keep Alive: Supported 00:34:12.907 Keep Alive Granularity: 1000 ms 00:34:12.907 00:34:12.907 NVM Command Set Attributes 00:34:12.907 ========================== 00:34:12.907 Submission Queue Entry Size 00:34:12.907 Max: 64 00:34:12.907 Min: 64 00:34:12.907 Completion Queue Entry Size 00:34:12.907 Max: 16 00:34:12.907 Min: 16 00:34:12.907 Number of Namespaces: 1024 00:34:12.907 Compare Command: Not Supported 00:34:12.907 Write Uncorrectable Command: Not Supported 00:34:12.907 Dataset Management Command: Supported 00:34:12.907 Write Zeroes Command: Supported 00:34:12.907 Set Features Save Field: Not Supported 00:34:12.907 Reservations: Not Supported 00:34:12.907 Timestamp: Not Supported 00:34:12.907 Copy: Not Supported 00:34:12.907 Volatile Write Cache: Present 00:34:12.907 Atomic Write Unit (Normal): 1 00:34:12.907 Atomic Write Unit (PFail): 1 00:34:12.907 Atomic Compare & Write Unit: 1 00:34:12.907 Fused Compare & Write: Not Supported 00:34:12.907 Scatter-Gather List 00:34:12.907 SGL Command Set: Supported 00:34:12.907 SGL Keyed: Not Supported 00:34:12.907 SGL Bit Bucket Descriptor: Not Supported 00:34:12.907 SGL Metadata Pointer: Not Supported 00:34:12.907 Oversized SGL: Not Supported 00:34:12.907 SGL Metadata Address: Not Supported 00:34:12.907 SGL Offset: Supported 00:34:12.907 Transport SGL Data Block: Not Supported 00:34:12.907 Replay Protected Memory Block: Not Supported 00:34:12.907 00:34:12.907 Firmware Slot Information 00:34:12.907 ========================= 00:34:12.907 Active slot: 0 00:34:12.907 00:34:12.907 Asymmetric Namespace Access 00:34:12.907 =========================== 00:34:12.907 Change Count : 0 00:34:12.907 Number of ANA Group Descriptors : 1 00:34:12.907 ANA Group Descriptor : 0 00:34:12.907 ANA Group ID : 1 00:34:12.907 Number of NSID Values : 1 00:34:12.907 Change Count : 0 00:34:12.907 ANA State : 1 00:34:12.907 Namespace Identifier : 1 00:34:12.907 00:34:12.907 Commands Supported and Effects 00:34:12.907 ============================== 00:34:12.907 Admin Commands 00:34:12.907 -------------- 00:34:12.907 Get Log Page (02h): Supported 00:34:12.907 Identify (06h): Supported 00:34:12.907 Abort (08h): Supported 00:34:12.907 Set Features (09h): Supported 00:34:12.907 Get Features (0Ah): Supported 00:34:12.907 Asynchronous Event Request (0Ch): Supported 00:34:12.907 Keep Alive (18h): Supported 00:34:12.907 I/O Commands 00:34:12.907 ------------ 00:34:12.907 Flush (00h): Supported 00:34:12.907 Write (01h): Supported LBA-Change 00:34:12.907 Read (02h): Supported 00:34:12.907 Write Zeroes (08h): Supported LBA-Change 00:34:12.907 Dataset Management (09h): Supported 00:34:12.907 00:34:12.907 Error Log 00:34:12.907 ========= 00:34:12.907 Entry: 0 00:34:12.907 Error Count: 0x3 00:34:12.907 Submission Queue Id: 0x0 00:34:12.907 Command Id: 0x5 00:34:12.907 Phase Bit: 0 00:34:12.907 Status Code: 0x2 00:34:12.907 Status Code Type: 0x0 00:34:12.907 Do Not Retry: 1 00:34:12.907 
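[Note] The three error-log entries that follow line up with the get_feature(0x01), get_feature(0x02) and get_feature(0x04) failures flagged at the start of this identify pass: Status Code 0x2 with Status Code Type 0x0 decodes to Invalid Field in Command, the generic answer for features the kernel target does not implement. The same log could be fetched afterwards with nvme-cli (device node again assumed):

    nvme error-log /dev/nvme1 --log-entries=3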
Error Location: 0x28 00:34:12.907 LBA: 0x0 00:34:12.907 Namespace: 0x0 00:34:12.907 Vendor Log Page: 0x0 00:34:12.907 ----------- 00:34:12.907 Entry: 1 00:34:12.907 Error Count: 0x2 00:34:12.907 Submission Queue Id: 0x0 00:34:12.907 Command Id: 0x5 00:34:12.907 Phase Bit: 0 00:34:12.907 Status Code: 0x2 00:34:12.907 Status Code Type: 0x0 00:34:12.907 Do Not Retry: 1 00:34:12.907 Error Location: 0x28 00:34:12.907 LBA: 0x0 00:34:12.907 Namespace: 0x0 00:34:12.907 Vendor Log Page: 0x0 00:34:12.907 ----------- 00:34:12.907 Entry: 2 00:34:12.907 Error Count: 0x1 00:34:12.907 Submission Queue Id: 0x0 00:34:12.907 Command Id: 0x4 00:34:12.907 Phase Bit: 0 00:34:12.907 Status Code: 0x2 00:34:12.907 Status Code Type: 0x0 00:34:12.907 Do Not Retry: 1 00:34:12.907 Error Location: 0x28 00:34:12.908 LBA: 0x0 00:34:12.908 Namespace: 0x0 00:34:12.908 Vendor Log Page: 0x0 00:34:12.908 00:34:12.908 Number of Queues 00:34:12.908 ================ 00:34:12.908 Number of I/O Submission Queues: 128 00:34:12.908 Number of I/O Completion Queues: 128 00:34:12.908 00:34:12.908 ZNS Specific Controller Data 00:34:12.908 ============================ 00:34:12.908 Zone Append Size Limit: 0 00:34:12.908 00:34:12.908 00:34:12.908 Active Namespaces 00:34:12.908 ================= 00:34:12.908 get_feature(0x05) failed 00:34:12.908 Namespace ID:1 00:34:12.908 Command Set Identifier: NVM (00h) 00:34:12.908 Deallocate: Supported 00:34:12.908 Deallocated/Unwritten Error: Not Supported 00:34:12.908 Deallocated Read Value: Unknown 00:34:12.908 Deallocate in Write Zeroes: Not Supported 00:34:12.908 Deallocated Guard Field: 0xFFFF 00:34:12.908 Flush: Supported 00:34:12.908 Reservation: Not Supported 00:34:12.908 Namespace Sharing Capabilities: Multiple Controllers 00:34:12.908 Size (in LBAs): 1953525168 (931GiB) 00:34:12.908 Capacity (in LBAs): 1953525168 (931GiB) 00:34:12.908 Utilization (in LBAs): 1953525168 (931GiB) 00:34:12.908 UUID: 112012c9-d2da-44e9-b874-2d01bffbc6d7 00:34:12.908 Thin Provisioning: Not Supported 00:34:12.908 Per-NS Atomic Units: Yes 00:34:12.908 Atomic Boundary Size (Normal): 0 00:34:12.908 Atomic Boundary Size (PFail): 0 00:34:12.908 Atomic Boundary Offset: 0 00:34:12.908 NGUID/EUI64 Never Reused: No 00:34:12.908 ANA group ID: 1 00:34:12.908 Namespace Write Protected: No 00:34:12.908 Number of LBA Formats: 1 00:34:12.908 Current LBA Format: LBA Format #00 00:34:12.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:12.908 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.908 rmmod nvme_tcp 00:34:12.908 rmmod nvme_fabrics 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:12.908 16:40:01 
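[Note] A quick sanity check on the namespace sizing reported above: 1953525168 LBAs at the 512-byte data size works out to 1,000,204,886,016 bytes, i.e. a nominal 1 TB drive, and 1,000,204,886,016 / 1024^3 ≈ 931.5, matching the 931GiB figure. In shell arithmetic:

    echo "$(( 1953525168 * 512 / 1024 / 1024 / 1024 )) GiB"   # prints: 931 GiB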
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.908 16:40:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.811 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:14.811 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:14.811 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:14.811 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:15.070 16:40:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:18.356 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:18.356 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:18.923 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:18.923 00:34:18.923 real 0m16.614s 00:34:18.923 user 0m4.320s 00:34:18.923 sys 0m8.673s 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.923 ************************************ 00:34:18.923 END TEST nvmf_identify_kernel_target 00:34:18.923 ************************************ 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.923 ************************************ 00:34:18.923 START TEST nvmf_auth_host 00:34:18.923 ************************************ 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:18.923 * Looking for test storage... 
00:34:18.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:18.923 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.183 --rc genhtml_branch_coverage=1 00:34:19.183 --rc genhtml_function_coverage=1 00:34:19.183 --rc genhtml_legend=1 00:34:19.183 --rc geninfo_all_blocks=1 00:34:19.183 --rc geninfo_unexecuted_blocks=1 00:34:19.183 00:34:19.183 ' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.183 --rc genhtml_branch_coverage=1 00:34:19.183 --rc genhtml_function_coverage=1 00:34:19.183 --rc genhtml_legend=1 00:34:19.183 --rc geninfo_all_blocks=1 00:34:19.183 --rc geninfo_unexecuted_blocks=1 00:34:19.183 00:34:19.183 ' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.183 --rc genhtml_branch_coverage=1 00:34:19.183 --rc genhtml_function_coverage=1 00:34:19.183 --rc genhtml_legend=1 00:34:19.183 --rc geninfo_all_blocks=1 00:34:19.183 --rc geninfo_unexecuted_blocks=1 00:34:19.183 00:34:19.183 ' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.183 --rc genhtml_branch_coverage=1 00:34:19.183 --rc genhtml_function_coverage=1 00:34:19.183 --rc genhtml_legend=1 00:34:19.183 --rc geninfo_all_blocks=1 00:34:19.183 --rc geninfo_unexecuted_blocks=1 00:34:19.183 00:34:19.183 ' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.183 16:40:07 
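[Note] The cmp_versions trace above is the harness gating lcov options on the installed lcov version: "lt 1.15 2" evaluates true, so the 1.x-era --rc lcov_branch_coverage flags get exported. A compact re-sketch of that dotted-version less-than test, simplified relative to what scripts/common.sh actually does:

    lt() {  # usage: lt A.B.C X.Y.Z  ->  returns 0 when the first version is lower
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.x: keep the old --rc option names"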
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.183 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.184 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.752 16:40:13 
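[Note] The auth.sh setup above fixes the shape of the test that follows: three digests crossed with five ffdhe DH groups, a host NQN freshly minted by nvme gen-hostnqn, and a kernel-side host entry under /sys/kernel/config/nvmet/hosts for DH-HMAC-CHAP. A harmless sketch of the matrix the test walks, with echo standing in for the real connect attempts:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            echo "auth attempt: hmac=$digest dhgroup=$dhgroup"
        done
    done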
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:25.752 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:25.752 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.752 
16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:25.752 Found net devices under 0000:af:00.0: cvl_0_0 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:25.752 Found net devices under 0000:af:00.1: cvl_0_1 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.752 16:40:13 
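The passage above is nvmf/common.sh selecting the physical test NICs: known Intel E810/X722 and Mellanox device IDs are collected into the e810, x722 and mlx arrays, the e810 set wins on this rig, and each chosen PCI function is mapped to its kernel interface through sysfs. A minimal sketch of that lookup, using the two functions found here (the cvl_0_* names are simply whatever the ice driver and udev assigned):

    # Map each selected PCI function to the net device(s) the kernel created for it.
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done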
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.752 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:25.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:34:25.753 00:34:25.753 --- 10.0.0.2 ping statistics --- 00:34:25.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.753 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:25.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:34:25.753 00:34:25.753 --- 10.0.0.1 ping statistics --- 00:34:25.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.753 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1180193 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1180193 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180193 ']' 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
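The nvmf_tcp_init sequence above builds a two-node topology on a single host: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions before nvmf_tgt is started inside the namespace (hence the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command line). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns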
00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da276a1f83cf7b988cb8645535fe1d99 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.my4 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da276a1f83cf7b988cb8645535fe1d99 0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da276a1f83cf7b988cb8645535fe1d99 0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da276a1f83cf7b988cb8645535fe1d99 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.my4 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.my4 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.my4 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.753 16:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d1372f01fd6a3e5da87d5e1ac03fbb7553551f600270f82c845228dc3399723b 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0Re 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1372f01fd6a3e5da87d5e1ac03fbb7553551f600270f82c845228dc3399723b 3 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1372f01fd6a3e5da87d5e1ac03fbb7553551f600270f82c845228dc3399723b 3 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1372f01fd6a3e5da87d5e1ac03fbb7553551f600270f82c845228dc3399723b 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0Re 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0Re 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0Re 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb2b37fd6e78f7150b841052cd46291361f0e4e25754f6e2 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PPY 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb2b37fd6e78f7150b841052cd46291361f0e4e25754f6e2 0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb2b37fd6e78f7150b841052cd46291361f0e4e25754f6e2 0 
00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb2b37fd6e78f7150b841052cd46291361f0e4e25754f6e2 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.753 16:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PPY 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PPY 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.PPY 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=540a182e145695353c488d1d9ac33a13a06b1ab4df3fbabf 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:25.753 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gjB 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 540a182e145695353c488d1d9ac33a13a06b1ab4df3fbabf 2 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 540a182e145695353c488d1d9ac33a13a06b1ab4df3fbabf 2 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=540a182e145695353c488d1d9ac33a13a06b1ab4df3fbabf 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gjB 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gjB 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gjB 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.754 16:40:14 
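Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps the resulting hex string as the secret, writes it to a mode-0600 temp file, and wraps it in a DHHC-1 string whose second field selects the hash applied to the key (00 = none, 01 = sha256, 02 = sha384, 03 = sha512). xtrace collapses the python step, so the one-liner below is a reconstruction rather than the script's literal code, but it reproduces the DHHC-1 strings that appear later in this log: the ASCII hex string plus its little-endian CRC-32, base64-encoded.

    # Sketch of gen_dhchap_key null 32; the python body is an assumption.
    key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex chars
    file=$(mktemp -t spdk.key-null.XXX)
    python3 -c 'import base64,binascii,struct,sys; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", binascii.crc32(k))).decode()))' "$key" 0 > "$file"
    chmod 0600 "$file"

Each keys[i] entry gets a companion ckeys[i] generated the same way with a different digest, so the test can authenticate in both directions.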
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0212dd6b8a9166b9de89709b972f7530 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7hH 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0212dd6b8a9166b9de89709b972f7530 1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0212dd6b8a9166b9de89709b972f7530 1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0212dd6b8a9166b9de89709b972f7530 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7hH 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7hH 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7hH 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8960b3d8016c10114bf09a633c5f8013 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3Lk 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8960b3d8016c10114bf09a633c5f8013 1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8960b3d8016c10114bf09a633c5f8013 1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8960b3d8016c10114bf09a633c5f8013 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3Lk 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3Lk 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3Lk 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c72cd813a5893558ea9ed30fd88bc0eb874c3d08856546a 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.E0i 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c72cd813a5893558ea9ed30fd88bc0eb874c3d08856546a 2 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c72cd813a5893558ea9ed30fd88bc0eb874c3d08856546a 2 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c72cd813a5893558ea9ed30fd88bc0eb874c3d08856546a 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.E0i 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.E0i 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.E0i 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:25.754 16:40:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b2828ef0c483e10071ccff5b66df95c 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6pT 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b2828ef0c483e10071ccff5b66df95c 0 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b2828ef0c483e10071ccff5b66df95c 0 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b2828ef0c483e10071ccff5b66df95c 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6pT 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6pT 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6pT 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=502a1335f714451a0669c24c60a5e62942916dfb88f6c12f8d35a35492da0bb1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KvQ 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 502a1335f714451a0669c24c60a5e62942916dfb88f6c12f8d35a35492da0bb1 3 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 502a1335f714451a0669c24c60a5e62942916dfb88f6c12f8d35a35492da0bb1 3 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=502a1335f714451a0669c24c60a5e62942916dfb88f6c12f8d35a35492da0bb1 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KvQ 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KvQ 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KvQ 00:34:25.754 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1180193 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180193 ']' 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.my4 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0Re ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Re 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.PPY 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gjB ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.gjB 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7hH 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3Lk ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Lk 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.E0i 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6pT ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6pT 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KvQ 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.014 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.273 16:40:14 
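With the temp files in place, the loop above registers every secret with the running target over /var/tmp/spdk.sock, pairing each key<i> with its controller key ckey<i>. The test drives this through its rpc_cmd wrapper; as standalone invocations of the same RPC client it amounts to:

    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.my4
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Re
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.PPY
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gjB
    scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.7hH
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Lk
    scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.E0i
    scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.6pT
    scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.KvQ

key4 deliberately has no companion (ckeys[4] is empty, hence the [[ -n '' ]] check), which exercises the unidirectional-authentication path.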
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:26.273 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:26.274 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:26.274 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:26.274 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:26.274 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:26.274 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:26.274 16:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:28.804 Waiting for block devices as requested 00:34:28.804 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:29.062 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:29.062 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:29.062 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:29.062 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:29.321 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:29.321 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:29.321 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:29.321 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:29.579 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:29.579 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:29.579 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:29.837 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:29.837 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:29.837 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:29.837 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:30.095 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:30.663 No valid GPT data, bailing 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:30.663 16:40:19 
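configure_kernel_target then builds the counterpart out of the kernel nvmet driver rather than a second SPDK instance: it claims the local NVMe disk once spdk-gpt.py confirms it carries no GPT, exports it as namespace 1 of nqn.2024-02.io.spdk:cnode0, and opens a TCP port at 10.0.0.1:4420 in the root namespace. xtrace drops the redirection target of each echo, so the attribute paths in the sketch below are an assumption based on the standard nvmet configfs layout; the values are the ones in the trace:

    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/

The nvme discover run that follows confirms the port is live: it returns two records on 10.0.0.1:4420, the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.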
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:30.663 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:30.923 00:34:30.923 Discovery Log Number of Records 2, Generation counter 2 00:34:30.923 =====Discovery Log Entry 0====== 00:34:30.923 trtype: tcp 00:34:30.923 adrfam: ipv4 00:34:30.923 subtype: current discovery subsystem 00:34:30.923 treq: not specified, sq flow control disable supported 00:34:30.923 portid: 1 00:34:30.923 trsvcid: 4420 00:34:30.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:30.923 traddr: 10.0.0.1 00:34:30.923 eflags: none 00:34:30.923 sectype: none 00:34:30.923 =====Discovery Log Entry 1====== 00:34:30.923 trtype: tcp 00:34:30.923 adrfam: ipv4 00:34:30.923 subtype: nvme subsystem 00:34:30.923 treq: not specified, sq flow control disable supported 00:34:30.923 portid: 1 00:34:30.923 trsvcid: 4420 00:34:30.923 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:30.923 traddr: 10.0.0.1 00:34:30.923 eflags: none 00:34:30.923 sectype: none 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 nvme0n1 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
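nvmet_auth_set_key arms the kernel target for one digest/dhgroup/key combination per iteration; nvmet_auth_init earlier created the host node, cleared allow-any-host, and linked the host into the subsystem's allowed_hosts. The echo targets are again stripped by xtrace, so the per-host dhchap attribute names below are an assumption (they match the interface Linux nvmet has exposed since v5.19); the key strings are the ones visible in the trace for keyid 0:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I:' > "$host/dhchap_key"
    echo 'DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=:' > "$host/dhchap_ctrl_key"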
00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.923 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.182 nvme0n1 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.182 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.183 16:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.183 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.442 nvme0n1 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.442 16:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.701 nvme0n1 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.701 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.960 nvme0n1 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.960 nvme0n1 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.960 16:40:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.960 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.961 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.961 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.219 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.220 nvme0n1 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.220 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.479 
16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.479 16:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.479 nvme0n1 00:34:32.479 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.479 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.479 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.479 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.479 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.479 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.738 16:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 nvme0n1 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.997 16:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.997 nvme0n1 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.997 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.256 16:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.256 nvme0n1 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.256 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.515 16:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 nvme0n1 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:33.774 16:40:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.774 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.775 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.034 nvme0n1 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
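
Each pass of the trace above follows the same shape: nvmet_auth_set_key installs the DHHC-1 secret for the current digest/dhgroup/keyid on the kernel nvmet target (the three echo lines emit 'hmac(sha256)', the FFDHE group, and the secret; the configfs destinations they are written to are not visible in this wrapped capture), after which connect_authenticate drives the SPDK host side. rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py, so the host-side half of this keyid=2 pass would look roughly like the sketch below; key2 and ckey2 are keyring names registered earlier in the run, not shown in this excerpt:

  # limit the host to the digest/dhgroup pair under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # authenticated attach over TCP; --dhchap-ctrlr-key makes the authentication bidirectional
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
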
00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.034 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.293 nvme0n1 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.293 16:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.552 nvme0n1 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:34.552 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.811 16:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.811 nvme0n1 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.811 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.070 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.329 nvme0n1 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 
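
The host/auth.sh@101 and @102 markers show that this is a nested sweep, with the digest fixed at sha256 throughout this excerpt: an outer loop over DH groups (ffdhe4096, then ffdhe6144, then ffdhe8192 below) and an inner loop over the five key indices. A sketch of the control flow, assuming dhgroups and keys/ckeys arrays populated earlier in auth.sh (their contents are not visible here); note that keyid 4 has an empty ckey, so the ${ckeys[keyid]:+...} expansion at @58 drops --dhchap-ctrlr-key for that pass:

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # install the key (and the bidirectional ctrlr key, when one exists) on the target
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
          # reconfigure the host, then attach, verify, and detach
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
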
00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.329 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.588 16:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.913 nvme0n1 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.913 16:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:35.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.914 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.201 nvme0n1 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.201 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:36.202 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.461 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.720 nvme0n1 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.720 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.288 nvme0n1 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.288 16:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:37.856 nvme0n1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.856 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 nvme0n1 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:38.427 
16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.427 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 nvme0n1 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.995 
16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.995 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.254 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.822 nvme0n1 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.822 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.823 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.391 nvme0n1 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.391 16:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.650 nvme0n1 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:40.650 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.651 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 nvme0n1 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:40.910 16:40:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 nvme0n1 00:34:40.910 16:40:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.910 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.169 nvme0n1 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.169 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.170 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.428 nvme0n1 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.428 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:41.429 16:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.429 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.688 nvme0n1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.688 
16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.688 16:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.688 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.947 nvme0n1 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.947 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.948 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.948 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.948 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.207 nvme0n1 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.207 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.466 nvme0n1 00:34:42.466 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.466 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.466 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.466 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.466 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.466 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.466 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.466 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.466 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.467 
16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.467 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.726 nvme0n1 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.726 
16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.726 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.985 nvme0n1 00:34:42.985 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.985 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.985 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.985 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.985 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.985 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.244 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.245 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.245 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.245 16:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.245 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.504 nvme0n1 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.504 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.763 nvme0n1 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.763 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 nvme0n1 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:44.022 16:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.022 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.281 nvme0n1 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.281 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.541 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.800 nvme0n1 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.800 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.060 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.319 nvme0n1 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.319 16:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.319 16:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.319 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.887 nvme0n1 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.887 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.887 
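Each connect in this trace resolves its target address through get_main_ns_ip, and the xtrace lines spell the logic out: a map from transport name to the environment variable holding the address, then an indirect expansion of that variable. A minimal sketch of that selection under the values visible in this run (NVMF_INITIATOR_IP=10.0.0.1 is taken from the echoed result; the real function lives in nvmf/common.sh and carries additional empty-value guards):

  # get_main_ns_ip selection, as shown by the xtrace above (simplified sketch)
  NVMF_INITIATOR_IP=10.0.0.1                                   # value echoed in this run
  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  ip=${ip_candidates[tcp]}             # transport is tcp here, so ip=NVMF_INITIATOR_IP
  [[ -n ${!ip} ]] && echo "${!ip}"     # indirect expansion yields 10.0.0.1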
16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.147 nvme0n1 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.147 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.406 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.665 nvme0n1 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.665 16:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.665 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.232 nvme0n1 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.232 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.491 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.491 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.058 nvme0n1 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.058 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.059 
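The nvmf/common.sh@769-783 records interleaved here are get_main_ns_ip resolving which address the initiator should dial: it maps the transport to the name of an environment variable, then expands that variable indirectly, yielding 10.0.0.1 for TCP in this run. Read back out of the trace, it comes to roughly the following; this is a reconstruction, and TEST_TRANSPORT as the name of the transport variable is a guess, so treat it as an approximation of the real helper rather than a verbatim copy.

    # Reconstructed shape of get_main_ns_ip (nvmf/common.sh@769-783); a sketch.
    get_main_ns_ip() {
            local ip
            local -A ip_candidates=(
                    [rdma]=NVMF_FIRST_TARGET_IP
                    [tcp]=NVMF_INITIATOR_IP
            )
            # Both the transport and its candidate variable name must be set (@775).
            [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
            ip=${ip_candidates[$TEST_TRANSPORT]}
            [[ -z ${!ip} ]] && return 1  # indirect expansion: the value of e.g. NVMF_INITIATOR_IP (@778)
            echo "${!ip}"                # 10.0.0.1 in this run (@783)
    }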
16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.059 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.627 nvme0n1 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.627 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.195 nvme0n1 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.195 16:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.195 16:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.195 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.196 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.196 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.196 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.763 nvme0n1 00:34:49.763 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.763 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.763 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.763 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.763 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.763 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:50.022 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:50.023 nvme0n1 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.023 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.282 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.283 nvme0n1 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:50.283 
16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.283 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.542 nvme0n1 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.542 
16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.542 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.802 nvme0n1 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.802 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.061 nvme0n1 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.061 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.062 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.321 nvme0n1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.321 
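[editor's note] The cycle the trace keeps repeating here reduces to a short RPC sequence. Below is a minimal sketch for replaying one iteration by hand; it uses only the RPC names and flags visible in the trace. The `rpc.py` path stands in for the harness's `rpc_cmd` wrapper, and the key names `key0`/`ckey0` are assumed to have been registered with the keyring earlier in the run (that setup is not part of this excerpt).

```bash
#!/usr/bin/env bash
# One connect/verify/detach cycle, as exercised by host/auth.sh@60-65.
rpc=./scripts/rpc.py   # hypothetical path; the harness calls this via rpc_cmd

# Restrict the initiator to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Connect with host key 0 and the matching bidirectional controller key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Authentication succeeded iff the controller shows up; the trace checks
# this with: [[ nvme0 == \n\v\m\e\0 ]]
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

# Tear down before the next keyid/dhgroup combination.
$rpc bdev_nvme_detach_controller nvme0
```

The per-keyid `DHHC-1:NN:...:` secrets echoed above follow the NVMe DH-HMAC-CHAP secret format, where the `NN` field after `DHHC-1` identifies the hash the secret was transformed with (`00` = unhashed, `01`/`02`/`03` = SHA-256/384/512).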
16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.321 16:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.321 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.580 nvme0n1 00:34:51.580 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.580 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.580 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.580 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.580 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.580 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:51.580 16:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:51.580 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.581 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.840 nvme0n1 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.840 16:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.840 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.099 nvme0n1 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.099 
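[editor's note] Before every attach, the trace walks the same `get_main_ns_ip` helper (nvmf/common.sh@769-783): build a transport→variable map, pick the variable for the transport in use, and indirectly expand it. The trace shows the values already expanded (`[[ -z tcp ]]`, `echo 10.0.0.1`), so the sketch below is a reconstruction, not the helper's actual definition; `TEST_TRANSPORT` is the usual name for the transport variable in SPDK's test harness and is an assumption here.

```bash
# Reconstructed sketch of the IP-selection logic at nvmf/common.sh@769-783.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # TEST_TRANSPORT is "tcp" in this run, so ip=NVMF_INITIATOR_IP.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}

    # Indirect expansion: the trace's "echo 10.0.0.1" is ${!ip}.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

# e.g. with TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1:
#   get_main_ns_ip   -> 10.0.0.1
```

The result feeds straight into the `-a 10.0.0.1` argument of the `bdev_nvme_attach_controller` calls that follow each expansion in the trace.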
16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.099 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.100 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.100 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.100 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.100 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.100 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.100 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:52.359 nvme0n1 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:52.359 16:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.359 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.618 nvme0n1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.618 16:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.618 16:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.618 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.876 nvme0n1 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.877 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.136 nvme0n1 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.136 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.395 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.654 nvme0n1 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.654 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.655 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.914 nvme0n1 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.914 16:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.914 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.482 nvme0n1 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.482 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.483 16:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.483 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.742 nvme0n1 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.742 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.323 nvme0n1 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.323 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.324 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.587 nvme0n1 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.587 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:55.845 16:40:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.845 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.846 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.104 nvme0n1 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEyNzZhMWY4M2NmN2I5ODhjYjg2NDU1MzVmZTFkOTmc445I: 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDEzNzJmMDFmZDZhM2U1ZGE4N2Q1ZTFhYzAzZmJiNzU1MzU1MWY2MDAyNzBmODJjODQ1MjI4ZGMzMzk5NzIzYh81Z9k=: 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.104 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.672 nvme0n1 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.672 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.930 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.498 nvme0n1 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.498 16:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.498 16:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.498 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.065 nvme0n1 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.065 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGM3MmNkODEzYTU4OTM1NThlYTllZDMwZmQ4OGJjMGViODc0YzNkMDg4NTY1NDZh4E/jlA==: 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: ]] 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmIyODI4ZWYwYzQ4M2UxMDA3MWNjZmY1YjY2ZGY5NWNfwrOU: 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.066 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.066 
16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.633 nvme0n1 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTAyYTEzMzVmNzE0NDUxYTA2NjljMjRjNjBhNWU2Mjk0MjkxNmRmYjg4ZjZjMTJmOGQzNWEzNTQ5MmRhMGJiMfXZvA0=: 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.633 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.200 nvme0n1 00:34:59.200 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.200 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.200 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.200 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.200 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.460 request: 00:34:59.460 { 00:34:59.460 "name": "nvme0", 00:34:59.460 "trtype": "tcp", 00:34:59.460 "traddr": "10.0.0.1", 00:34:59.460 "adrfam": "ipv4", 00:34:59.460 "trsvcid": "4420", 00:34:59.460 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:59.460 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:59.460 "prchk_reftag": false, 00:34:59.460 "prchk_guard": false, 00:34:59.460 "hdgst": false, 00:34:59.460 "ddgst": false, 00:34:59.460 "allow_unrecognized_csi": false, 00:34:59.460 "method": "bdev_nvme_attach_controller", 00:34:59.460 "req_id": 1 00:34:59.460 } 00:34:59.460 Got JSON-RPC error response 00:34:59.460 response: 00:34:59.460 { 00:34:59.460 "code": -5, 00:34:59.460 "message": "Input/output error" 00:34:59.460 } 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
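The JSON-RPC exchange above is the first negative check of this pass: the target has just been re-keyed to require DH-HMAC-CHAP (sha256, ffdhe2048, keyid 1), so an attach that presents no key must be rejected. bdev_nvme_attach_controller returns -5 (Input/output error), which the test's NOT wrapper counts as a pass. A minimal sketch of the same check, assuming an SPDK checkout with scripts/rpc.py and the default RPC socket (the paths are assumptions; the flags are the ones recorded in the trace):

    # Expected-failure probe: with DH-HMAC-CHAP required by the target,
    # an attach that offers no --dhchap-key has to error out.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0; then
        echo "ERROR: unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi
    # ...and no stale controller may be left behind (mirrors the jq check
    # performed by the trace at host/auth.sh@114):
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
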
00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:59.460 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.461 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.461 request: 00:34:59.461 { 00:34:59.461 "name": "nvme0", 00:34:59.461 "trtype": "tcp", 00:34:59.461 "traddr": "10.0.0.1", 00:34:59.461 "adrfam": "ipv4", 00:34:59.461 "trsvcid": "4420", 00:34:59.461 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:59.461 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:59.461 "prchk_reftag": false, 00:34:59.461 "prchk_guard": false, 00:34:59.461 "hdgst": false, 00:34:59.461 "ddgst": false, 00:34:59.461 "dhchap_key": "key2", 00:34:59.461 "allow_unrecognized_csi": false, 00:34:59.461 "method": "bdev_nvme_attach_controller", 00:34:59.461 "req_id": 1 00:34:59.461 } 00:34:59.461 Got JSON-RPC error response 00:34:59.461 response: 00:34:59.461 { 00:34:59.461 "code": -5, 00:34:59.461 "message": "Input/output error" 00:34:59.461 } 00:34:59.461 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:59.461 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:59.461 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.461 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.461 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
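The exchange above rejects a host that offers key2 while the target expects keyid 1; the trace that follows repeats the attempt with the correct host key (key1) but a mismatched controller key (ckey2). Because --dhchap-ctrlr-key requests bidirectional authentication against that specific key, the attach must still fail. A sketch under the same assumptions as above (rpc.py path assumed; the key names key1/ckey2 were registered in the keyring earlier in the test, outside this excerpt):

    # Bidirectional-auth probe: correct host key, wrong controller key.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "ERROR: attach with mismatched controller key succeeded" >&2
        exit 1
    fi
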
00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:59.720 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.721 request: 00:34:59.721 { 00:34:59.721 "name": "nvme0", 00:34:59.721 "trtype": "tcp", 00:34:59.721 "traddr": "10.0.0.1", 00:34:59.721 "adrfam": "ipv4", 00:34:59.721 "trsvcid": "4420", 00:34:59.721 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:59.721 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:59.721 "prchk_reftag": false, 00:34:59.721 "prchk_guard": false, 00:34:59.721 "hdgst": false, 00:34:59.721 "ddgst": false, 00:34:59.721 "dhchap_key": "key1", 00:34:59.721 "dhchap_ctrlr_key": "ckey2", 00:34:59.721 "allow_unrecognized_csi": false, 00:34:59.721 "method": "bdev_nvme_attach_controller", 00:34:59.721 "req_id": 1 00:34:59.721 } 00:34:59.721 Got JSON-RPC error response 00:34:59.721 response: 00:34:59.721 { 00:34:59.721 "code": -5, 00:34:59.721 "message": "Input/output 
error" 00:34:59.721 } 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.721 nvme0n1 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.721 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.980 request: 00:34:59.980 { 00:34:59.980 "name": "nvme0", 00:34:59.980 "dhchap_key": "key1", 00:34:59.980 "dhchap_ctrlr_key": "ckey2", 00:34:59.980 "method": "bdev_nvme_set_keys", 00:34:59.980 "req_id": 1 00:34:59.980 } 00:34:59.980 Got JSON-RPC error response 00:34:59.980 response: 00:34:59.980 { 00:34:59.980 "code": -13, 00:34:59.980 "message": "Permission denied" 00:34:59.980 } 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:59.980 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:01.357 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IyYjM3ZmQ2ZTc4ZjcxNTBiODQxMDUyY2Q0NjI5MTM2MWYwZTRlMjU3NTRmNmUy8mT5Tg==: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: ]] 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NTQwYTE4MmUxNDU2OTUzNTNjNDg4ZDFkOWFjMzNhMTNhMDZiMWFiNGRmM2ZiYWJmMiDW1A==: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.295 nvme0n1 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDIxMmRkNmI4YTkxNjZiOWRlODk3MDliOTcyZjc1MzBl96Qi: 00:35:02.295 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: ]] 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk2MGIzZDgwMTZjMTAxMTRiZjA5YTYzM2M1ZjgwMTMF69k+: 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.296 request: 00:35:02.296 { 00:35:02.296 "name": "nvme0", 00:35:02.296 "dhchap_key": "key2", 00:35:02.296 "dhchap_ctrlr_key": "ckey1", 00:35:02.296 "method": "bdev_nvme_set_keys", 00:35:02.296 "req_id": 1 00:35:02.296 } 00:35:02.296 Got JSON-RPC error response 00:35:02.296 response: 00:35:02.296 { 00:35:02.296 "code": -13, 00:35:02.296 "message": "Permission denied" 00:35:02.296 } 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.296 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.555 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:02.555 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:03.492 16:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:03.492 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:03.492 rmmod nvme_tcp 00:35:03.492 rmmod nvme_fabrics 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1180193 ']' 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1180193 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1180193 ']' 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1180193 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180193 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180193' 00:35:03.492 killing process with pid 1180193 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1180193 00:35:03.492 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1180193 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:03.752 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.657 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:05.917 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:09.208 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:09.208 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:09.467 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:09.726 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.my4 /tmp/spdk.key-null.PPY /tmp/spdk.key-sha256.7hH /tmp/spdk.key-sha384.E0i /tmp/spdk.key-sha512.KvQ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:09.726 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:12.262 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:12.262 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:12.262 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
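[Editorial sketch, not part of the captured log.] Condensed from the cleanup and clean_kernel_target trace above, the kernel nvmet teardown is plain configfs surgery followed by a module unload. The commands below are taken from the logged trace; the bare "echo 0" step presumably disables the namespace through its configfs attribute, though the redirect target is not visible in the log:

    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet

The rmdir order runs leaf-to-root because configfs refuses to remove a directory that still has children, and the nvmet modules can only be unloaded once the tree is empty.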
00:35:12.262 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:12.262 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:12.262 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:12.262 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:12.262 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:12.262 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:12.522 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:12.522 00:35:12.522 real 0m53.603s 00:35:12.522 user 0m48.367s 00:35:12.522 sys 0m12.521s 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.522 ************************************ 00:35:12.522 END TEST nvmf_auth_host 00:35:12.522 ************************************ 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.522 ************************************ 00:35:12.522 START TEST nvmf_digest 00:35:12.522 ************************************ 00:35:12.522 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:12.782 * Looking for test storage... 
00:35:12.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:12.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.782 --rc genhtml_branch_coverage=1 00:35:12.782 --rc genhtml_function_coverage=1 00:35:12.782 --rc genhtml_legend=1 00:35:12.782 --rc geninfo_all_blocks=1 00:35:12.782 --rc geninfo_unexecuted_blocks=1 00:35:12.782 00:35:12.782 ' 00:35:12.782 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:12.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.782 --rc genhtml_branch_coverage=1 00:35:12.782 --rc genhtml_function_coverage=1 00:35:12.782 --rc genhtml_legend=1 00:35:12.783 --rc geninfo_all_blocks=1 00:35:12.783 --rc geninfo_unexecuted_blocks=1 00:35:12.783 00:35:12.783 ' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:12.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.783 --rc genhtml_branch_coverage=1 00:35:12.783 --rc genhtml_function_coverage=1 00:35:12.783 --rc genhtml_legend=1 00:35:12.783 --rc geninfo_all_blocks=1 00:35:12.783 --rc geninfo_unexecuted_blocks=1 00:35:12.783 00:35:12.783 ' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:12.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.783 --rc genhtml_branch_coverage=1 00:35:12.783 --rc genhtml_function_coverage=1 00:35:12.783 --rc genhtml_legend=1 00:35:12.783 --rc geninfo_all_blocks=1 00:35:12.783 --rc geninfo_unexecuted_blocks=1 00:35:12.783 00:35:12.783 ' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.783 
16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:12.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:12.783 16:41:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:12.783 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.356 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.356 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.357 
16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:19.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:19.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:19.357 Found net devices under 0000:af:00.0: cvl_0_0 
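[Editorial sketch, not part of the captured log.] The device-discovery loop traced above maps each supported PCI function to its kernel net device through sysfs. A sketch of the same lookup, using the addresses from this run:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # every entry under the device's net/ directory is a netdev name
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

This is how the harness arrives at cvl_0_0 for the first ice-driven E810 port here, and cvl_0_1 for the second just below.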
00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:19.357 Found net devices under 0000:af:00.1: cvl_0_1 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.357 16:41:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:35:19.357 00:35:19.357 --- 10.0.0.2 ping statistics --- 00:35:19.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.357 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:19.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:35:19.357 00:35:19.357 --- 10.0.0.1 ping statistics --- 00:35:19.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.357 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.357 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.357 ************************************ 00:35:19.358 START TEST nvmf_digest_clean 00:35:19.358 ************************************ 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1193675 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1193675 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193675 ']' 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.358 [2024-12-16 16:41:07.242442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:19.358 [2024-12-16 16:41:07.242484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.358 [2024-12-16 16:41:07.322319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.358 [2024-12-16 16:41:07.343393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.358 [2024-12-16 16:41:07.343426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.358 [2024-12-16 16:41:07.343433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.358 [2024-12-16 16:41:07.343438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.358 [2024-12-16 16:41:07.343443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
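[editor note] The nvmf_tcp_init trace above builds a self-contained NVMe/TCP loopback out of the two detected cvl ports: the target-side interface is moved into a private network namespace, the initiator-side interface stays in the root namespace, and an iptables rule opens the NVMe/TCP port. A condensed sketch of the equivalent shell steps, using the interface names and addresses from this run (every command appears in the trace; only the SPDK_NVMF comment tag that the ipts wrapper appends to the iptables rule is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into its own ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # default NVMe/TCP port
  ping -c 1 10.0.0.2                                               # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns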
00:35:19.358 [2024-12-16 16:41:07.343924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.358 null0 00:35:19.358 [2024-12-16 16:41:07.509998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.358 [2024-12-16 16:41:07.534185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1193695 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1193695 /var/tmp/bperf.sock 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193695 ']' 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:19.358 [2024-12-16 16:41:07.586543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:19.358 [2024-12-16 16:41:07.586580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193695 ] 00:35:19.358 [2024-12-16 16:41:07.660790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.358 [2024-12-16 16:41:07.683228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:19.358 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:19.617 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.617 16:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.879 nvme0n1 00:35:19.879 16:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:19.879 16:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.879 Running I/O for 2 seconds... 
00:35:21.886 25787.00 IOPS, 100.73 MiB/s [2024-12-16T15:41:10.495Z] 25248.50 IOPS, 98.63 MiB/s 00:35:21.886 Latency(us) 00:35:21.886 [2024-12-16T15:41:10.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.886 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:21.886 nvme0n1 : 2.01 25252.74 98.64 0.00 0.00 5063.74 2559.02 11734.06 00:35:21.886 [2024-12-16T15:41:10.495Z] =================================================================================================================== 00:35:21.886 [2024-12-16T15:41:10.495Z] Total : 25252.74 98.64 0.00 0.00 5063.74 2559.02 11734.06 00:35:21.886 { 00:35:21.886 "results": [ 00:35:21.886 { 00:35:21.886 "job": "nvme0n1", 00:35:21.886 "core_mask": "0x2", 00:35:21.886 "workload": "randread", 00:35:21.886 "status": "finished", 00:35:21.886 "queue_depth": 128, 00:35:21.886 "io_size": 4096, 00:35:21.886 "runtime": 2.006475, 00:35:21.886 "iops": 25252.74424052131, 00:35:21.886 "mibps": 98.64353218953637, 00:35:21.886 "io_failed": 0, 00:35:21.886 "io_timeout": 0, 00:35:21.886 "avg_latency_us": 5063.73829505972, 00:35:21.886 "min_latency_us": 2559.024761904762, 00:35:21.886 "max_latency_us": 11734.064761904761 00:35:21.886 } 00:35:21.886 ], 00:35:21.886 "core_count": 1 00:35:21.886 } 00:35:21.886 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:21.886 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:21.886 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:21.886 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:21.886 | select(.opcode=="crc32c") 00:35:21.886 | "\(.module_name) \(.executed)"' 00:35:21.886 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1193695 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193695 ']' 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193695 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193695 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193695' 00:35:22.146 killing process with pid 1193695 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193695 00:35:22.146 Received shutdown signal, test time was about 2.000000 seconds 00:35:22.146 00:35:22.146 Latency(us) 00:35:22.146 [2024-12-16T15:41:10.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.146 [2024-12-16T15:41:10.755Z] =================================================================================================================== 00:35:22.146 [2024-12-16T15:41:10.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:22.146 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193695 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194159 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194159 /var/tmp/bperf.sock 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194159 ']' 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:22.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.405 16:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:22.405 [2024-12-16 16:41:10.917044] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:22.405 [2024-12-16 16:41:10.917091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194159 ] 00:35:22.405 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.405 Zero copy mechanism will not be used. 00:35:22.405 [2024-12-16 16:41:10.992668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.405 [2024-12-16 16:41:11.012305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.665 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.665 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:22.665 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:22.665 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:22.665 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:22.922 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.922 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.181 nvme0n1 00:35:23.181 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:23.181 16:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:23.181 Zero copy mechanism will not be used. 00:35:23.181 Running I/O for 2 seconds... 
00:35:25.125 6232.00 IOPS, 779.00 MiB/s [2024-12-16T15:41:13.993Z] 6034.50 IOPS, 754.31 MiB/s 00:35:25.384 Latency(us) 00:35:25.384 [2024-12-16T15:41:13.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.384 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:25.384 nvme0n1 : 2.00 6035.68 754.46 0.00 0.00 2648.12 546.13 7115.34 00:35:25.384 [2024-12-16T15:41:13.993Z] =================================================================================================================== 00:35:25.384 [2024-12-16T15:41:13.993Z] Total : 6035.68 754.46 0.00 0.00 2648.12 546.13 7115.34 00:35:25.384 { 00:35:25.384 "results": [ 00:35:25.384 { 00:35:25.384 "job": "nvme0n1", 00:35:25.384 "core_mask": "0x2", 00:35:25.384 "workload": "randread", 00:35:25.384 "status": "finished", 00:35:25.384 "queue_depth": 16, 00:35:25.384 "io_size": 131072, 00:35:25.384 "runtime": 2.002259, 00:35:25.384 "iops": 6035.682696394422, 00:35:25.384 "mibps": 754.4603370493028, 00:35:25.384 "io_failed": 0, 00:35:25.384 "io_timeout": 0, 00:35:25.384 "avg_latency_us": 2648.1191572393955, 00:35:25.384 "min_latency_us": 546.1333333333333, 00:35:25.384 "max_latency_us": 7115.337142857143 00:35:25.384 } 00:35:25.384 ], 00:35:25.384 "core_count": 1 00:35:25.384 } 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:25.384 | select(.opcode=="crc32c") 00:35:25.384 | "\(.module_name) \(.executed)"' 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194159 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194159 ']' 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194159 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.384 16:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194159 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194159' 00:35:25.643 killing process with pid 1194159 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194159 00:35:25.643 Received shutdown signal, test time was about 2.000000 seconds 00:35:25.643 00:35:25.643 Latency(us) 00:35:25.643 [2024-12-16T15:41:14.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.643 [2024-12-16T15:41:14.252Z] =================================================================================================================== 00:35:25.643 [2024-12-16T15:41:14.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194159 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194767 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194767 /var/tmp/bperf.sock 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194767 ']' 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:25.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.643 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.643 [2024-12-16 16:41:14.222896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:25.643 [2024-12-16 16:41:14.222941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194767 ] 00:35:25.902 [2024-12-16 16:41:14.297626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.902 [2024-12-16 16:41:14.319933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.902 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.902 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:25.902 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:25.902 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:25.902 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:26.162 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.162 16:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.421 nvme0n1 00:35:26.680 16:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:26.680 16:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:26.680 Running I/O for 2 seconds... 
00:35:28.551 28643.00 IOPS, 111.89 MiB/s [2024-12-16T15:41:17.160Z] 28742.00 IOPS, 112.27 MiB/s 00:35:28.551 Latency(us) 00:35:28.551 [2024-12-16T15:41:17.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.551 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:28.551 nvme0n1 : 2.01 28757.65 112.33 0.00 0.00 4445.21 1802.24 12982.37 00:35:28.551 [2024-12-16T15:41:17.160Z] =================================================================================================================== 00:35:28.551 [2024-12-16T15:41:17.160Z] Total : 28757.65 112.33 0.00 0.00 4445.21 1802.24 12982.37 00:35:28.551 { 00:35:28.551 "results": [ 00:35:28.551 { 00:35:28.551 "job": "nvme0n1", 00:35:28.551 "core_mask": "0x2", 00:35:28.551 "workload": "randwrite", 00:35:28.551 "status": "finished", 00:35:28.551 "queue_depth": 128, 00:35:28.551 "io_size": 4096, 00:35:28.551 "runtime": 2.005588, 00:35:28.551 "iops": 28757.651122763, 00:35:28.551 "mibps": 112.33457469829297, 00:35:28.551 "io_failed": 0, 00:35:28.551 "io_timeout": 0, 00:35:28.551 "avg_latency_us": 4445.207608215352, 00:35:28.551 "min_latency_us": 1802.24, 00:35:28.551 "max_latency_us": 12982.369523809524 00:35:28.551 } 00:35:28.551 ], 00:35:28.551 "core_count": 1 00:35:28.551 } 00:35:28.551 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:28.552 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:28.552 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:28.552 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:28.552 | select(.opcode=="crc32c") 00:35:28.552 | "\(.module_name) \(.executed)"' 00:35:28.552 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194767 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194767 ']' 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194767 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194767 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194767' 00:35:28.811 killing process with pid 1194767 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194767 00:35:28.811 Received shutdown signal, test time was about 2.000000 seconds 00:35:28.811 00:35:28.811 Latency(us) 00:35:28.811 [2024-12-16T15:41:17.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.811 [2024-12-16T15:41:17.420Z] =================================================================================================================== 00:35:28.811 [2024-12-16T15:41:17.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:28.811 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194767 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195291 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195291 /var/tmp/bperf.sock 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195291 ']' 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.071 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.071 [2024-12-16 16:41:17.596022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:29.071 [2024-12-16 16:41:17.596068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195291 ] 00:35:29.071 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:29.071 Zero copy mechanism will not be used. 00:35:29.071 [2024-12-16 16:41:17.671496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.330 [2024-12-16 16:41:17.692941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.330 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.330 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:29.330 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:29.330 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:29.330 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:29.589 16:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.589 16:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.848 nvme0n1 00:35:29.848 16:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:29.848 16:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:30.107 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:30.107 Zero copy mechanism will not be used. 00:35:30.107 Running I/O for 2 seconds... 
00:35:31.981 6373.00 IOPS, 796.62 MiB/s [2024-12-16T15:41:20.590Z] 6413.50 IOPS, 801.69 MiB/s 00:35:31.981 Latency(us) 00:35:31.981 [2024-12-16T15:41:20.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.981 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:31.981 nvme0n1 : 2.00 6408.10 801.01 0.00 0.00 2492.15 1895.86 8613.30 00:35:31.981 [2024-12-16T15:41:20.590Z] =================================================================================================================== 00:35:31.981 [2024-12-16T15:41:20.590Z] Total : 6408.10 801.01 0.00 0.00 2492.15 1895.86 8613.30 00:35:31.981 { 00:35:31.981 "results": [ 00:35:31.981 { 00:35:31.981 "job": "nvme0n1", 00:35:31.981 "core_mask": "0x2", 00:35:31.981 "workload": "randwrite", 00:35:31.981 "status": "finished", 00:35:31.981 "queue_depth": 16, 00:35:31.981 "io_size": 131072, 00:35:31.981 "runtime": 2.004649, 00:35:31.981 "iops": 6408.104361411898, 00:35:31.981 "mibps": 801.0130451764873, 00:35:31.981 "io_failed": 0, 00:35:31.981 "io_timeout": 0, 00:35:31.981 "avg_latency_us": 2492.151032227931, 00:35:31.981 "min_latency_us": 1895.8628571428571, 00:35:31.981 "max_latency_us": 8613.302857142857 00:35:31.981 } 00:35:31.981 ], 00:35:31.981 "core_count": 1 00:35:31.981 } 00:35:31.981 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:31.981 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:31.981 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:31.981 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:31.981 | select(.opcode=="crc32c") 00:35:31.981 | "\(.module_name) \(.executed)"' 00:35:31.981 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195291 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195291 ']' 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195291 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195291 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195291' 00:35:32.240 killing process with pid 1195291 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195291 00:35:32.240 Received shutdown signal, test time was about 2.000000 seconds 00:35:32.240 00:35:32.240 Latency(us) 00:35:32.240 [2024-12-16T15:41:20.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.240 [2024-12-16T15:41:20.849Z] =================================================================================================================== 00:35:32.240 [2024-12-16T15:41:20.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:32.240 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195291 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1193675 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193675 ']' 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193675 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193675 00:35:32.499 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:32.500 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:32.500 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193675' 00:35:32.500 killing process with pid 1193675 00:35:32.500 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193675 00:35:32.500 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193675 00:35:32.759 00:35:32.759 real 0m13.945s 00:35:32.759 user 0m26.777s 00:35:32.759 sys 0m4.513s 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:32.759 ************************************ 00:35:32.759 END TEST nvmf_digest_clean 00:35:32.759 ************************************ 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.759 ************************************ 00:35:32.759 START TEST nvmf_digest_error 00:35:32.759 ************************************ 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1195876 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1195876 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1195876 ']' 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.759 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.759 [2024-12-16 16:41:21.260529] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:32.759 [2024-12-16 16:41:21.260566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.759 [2024-12-16 16:41:21.323972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.759 [2024-12-16 16:41:21.345733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.759 [2024-12-16 16:41:21.345765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.759 [2024-12-16 16:41:21.345773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.759 [2024-12-16 16:41:21.345779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.759 [2024-12-16 16:41:21.345784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:32.759 [2024-12-16 16:41:21.346312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.019 [2024-12-16 16:41:21.454853] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.019 null0 00:35:33.019 [2024-12-16 16:41:21.544729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.019 [2024-12-16 16:41:21.568917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196011 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196011 /var/tmp/bperf.sock 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196011 ']' 
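[editor note] The rpc_cmd at host/digest.sh@43 does not echo its payload, but the notices it produces above (a null0 bdev, TCP Transport Init, a listener on 10.0.0.2 port 4420) pin down its shape. A rough reconstruction against the target's RPC socket — the null bdev sizes and the add-ns step are assumptions; the transport options, subsystem NQN, and listener address are confirmed elsewhere in the trace:

  TRPC='scripts/rpc.py'                            # target app default socket /var/tmp/spdk.sock
  $TRPC framework_start_init                       # target was started with --wait-for-rpc
  $TRPC bdev_null_create null0 100 4096            # sizes assumed, not shown in the log
  $TRPC nvmf_create_transport -t tcp -o            # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
  $TRPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $TRPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $TRPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420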
00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:33.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.019 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.019 [2024-12-16 16:41:21.619431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:33.019 [2024-12-16 16:41:21.619469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196011 ] 00:35:33.279 [2024-12-16 16:41:21.692812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.279 [2024-12-16 16:41:21.714735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.279 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.279 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:33.279 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:33.279 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:33.538 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:33.538 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.538 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.538 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.538 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:33.538 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:33.797 nvme0n1 00:35:33.797 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:33.797 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.797 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
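[editor note] From here the error-path run mirrors the clean ones, except that crc32c has been assigned to the accel error module. The RPC flow traced above and continued below boils down to the following; all commands and flags appear verbatim in the trace, while which socket each call targets is inferred from the bperf_rpc vs. rpc_cmd helpers, and the inline comments are interpretation:

  BPERF='scripts/rpc.py -s /var/tmp/bperf.sock'    # bdevperf (initiator side)
  TRPC='scripts/rpc.py'                            # nvmf target, default /var/tmp/spdk.sock
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TRPC accel_error_inject_error -o crc32c -t disable       # keep digests clean while connecting
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # --ddgst enables data digest
  $TRPC accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The injected corruption on the target side is what produces the "data digest error" messages and the COMMAND TRANSIENT TRANSPORT ERROR completions in the run that follows.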
00:35:33.797 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.797 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:33.797 16:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:33.797 Running I/O for 2 seconds... 00:35:33.797 [2024-12-16 16:41:22.400295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:33.797 [2024-12-16 16:41:22.400328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.797 [2024-12-16 16:41:22.400338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.056 [2024-12-16 16:41:22.412078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.056 [2024-12-16 16:41:22.412107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.412117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.424063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.424084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.424093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.432406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.432427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.432435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.443250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.443271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.443279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.451801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.451821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.451830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.461515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.461536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.461545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.470237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.470260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.470269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.480720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.480742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.480751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.490656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.490677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.490685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.498665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.498685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.498693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.508283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.508303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.508311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.517536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.517555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.517563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.526580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.526599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.526612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.537082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.537109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.537117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.546247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.546275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.546283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.555793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.555813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.555821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.565439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.565460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.565469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.573898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.573918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.573926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.584875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.584895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.584903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.593260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.593279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.593287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.604664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.604690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.604698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.616723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.616744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.616752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.626253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.626272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.626280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.634696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.634715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.634724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.646651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.646671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.057 [2024-12-16 16:41:22.646679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.057 [2024-12-16 16:41:22.658501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.057 [2024-12-16 16:41:22.658521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.057 [2024-12-16 16:41:22.658545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.667601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.667623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.667631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.680044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.680065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.680074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.692522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.692543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.692551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.703929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.703949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.703960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.715767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.715787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.715795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.724737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.724758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.724765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.735922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.735942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4061 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.735951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.745023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.745043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.745051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.754747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.754768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.754777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.762910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.762929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.762937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.774395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.774416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.774424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.785682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.785701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.785709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.798022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.798048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.798056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.808422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.808442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.808450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.817107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.817127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.817135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.828190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.828210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.828218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.836630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.836650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.836657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.848044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.848063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.848070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.859015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.859035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.859042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.867444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.867464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.867472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.879282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.879301] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.879309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.892651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.317 [2024-12-16 16:41:22.892670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.317 [2024-12-16 16:41:22.892678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.317 [2024-12-16 16:41:22.900964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.318 [2024-12-16 16:41:22.900983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.318 [2024-12-16 16:41:22.900991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.318 [2024-12-16 16:41:22.912033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.318 [2024-12-16 16:41:22.912053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.318 [2024-12-16 16:41:22.912061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.924345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.924364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.924372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.933567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.933586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.933594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.943133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.943152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.943160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.953395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 
00:35:34.577 [2024-12-16 16:41:22.953414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.953422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.965778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.965798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.965806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.977372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.977391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.977403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.989007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.989027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.989034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:22.998299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:22.998318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:22.998326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.577 [2024-12-16 16:41:23.010151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.577 [2024-12-16 16:41:23.010171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.577 [2024-12-16 16:41:23.010179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.018829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.018848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.018856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.030461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.030481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.030489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.039724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.039743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.050329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.050348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.050356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.059128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.059148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.059156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.070333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.070356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.070364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.081478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.081498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.081506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.089910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.089928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.089936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.100376] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.100395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.100403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.111539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.111558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.111565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.123229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.123248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.123257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.135110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.135128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.135136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.146978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.146997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.147005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.160066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.160086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.160098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.578 [2024-12-16 16:41:23.170854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.170873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.170881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
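Each injected failure in the run above is reported as the same record pair: nvme_tcp.c logs "data digest error" when the CRC32C it computes over the received payload (corrupted here by the injection) disagrees with the digest carried in the PDU, and nvme_qpair.c then prints the affected READ command together with its completion, status 00/22, COMMAND TRANSIENT TRANSPORT ERROR. Because the bdev layer was configured with --bdev-retry-count -1, each such completion is retried rather than surfaced as an I/O error, which is why the run keeps going. A quick consistency check over a saved copy of this output (the bperf.log filename is hypothetical):

  # one TCP-layer digest error per injected failure...
  grep -c 'data digest error' bperf.log
  # ...and one transient-transport completion to match it
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log

The two counts should agree: one completion per digest error.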
00:35:34.578 [2024-12-16 16:41:23.180098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.578 [2024-12-16 16:41:23.180119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.578 [2024-12-16 16:41:23.180127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.192492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.192512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.192520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.202536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.202555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.202563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.211739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.211759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.211767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.221079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.221112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.229762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.229782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.229790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.239991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.240011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.240019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.253174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.253194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.253205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.265503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.265523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.265531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.273828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.273848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.273856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.285307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.285327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.285336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.297474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.297495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.297504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.310344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.310366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.310374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.322361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.322382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.322391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.335350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.335370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.335378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.343517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.343536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.343544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.355684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.355704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.355712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.367810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.367830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.367839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.379843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.379864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.379872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 24197.00 IOPS, 94.52 MiB/s [2024-12-16T15:41:23.447Z] [2024-12-16 16:41:23.390682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.390701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.390710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.399034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.399053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15530 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:34.838 [2024-12-16 16:41:23.399061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.838 [2024-12-16 16:41:23.411079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.838 [2024-12-16 16:41:23.411104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.839 [2024-12-16 16:41:23.411112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.839 [2024-12-16 16:41:23.423280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.839 [2024-12-16 16:41:23.423299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.839 [2024-12-16 16:41:23.423307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:34.839 [2024-12-16 16:41:23.436255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:34.839 [2024-12-16 16:41:23.436275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.839 [2024-12-16 16:41:23.436283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.448831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.448851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.448862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.457042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.457062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.466044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.466064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.466072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.475172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.475191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:1267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.475199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.485481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.485502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.485509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.495340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.495360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.495369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.503813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.503834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.503842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.513522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.513541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.513549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.522817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.522837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.522844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.533057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.533080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.099 [2024-12-16 16:41:23.533088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.099 [2024-12-16 16:41:23.541330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.099 [2024-12-16 16:41:23.541349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:35.099 [2024-12-16 16:41:23.541357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... some seventy further records elided for readability: from 16:41:23.551 to 16:41:24.360 the same three-entry pattern repeats, an nvme_tcp.c:1365 *ERROR* "data digest error on tqpair=(0xf466e0)", an nvme_qpair.c:243 *NOTICE* READ command print, and an nvme_qpair.c:474 *NOTICE* "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion, differing only in timestamp, cid, and lba (all len:1, qid:1, sqhd:0001, dnr:0) ...]
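Note on the record format (a hedged reading of the SPDK log fields, not part of the captured output): each injected failure produces the three entries above, which decode roughly as

    nvme_tcp.c:1365  data digest error on tqpair=(...)    # receive-path CRC32C check of the incoming data failed
    nvme_qpair.c:243 READ sqid:1 cid:N nsid:1 lba:L len:1  # the I/O that carried the bad payload: submission queue,
                                                           # command id, namespace, starting LBA, length in blocks
    nvme_qpair.c:474 ... (00/22) ... sqhd:... p:0 m:0 dnr:0
                     # completion status printed as (SCT/SC) = 0x0/0x22, the generic "Command Transient
                     # Transport Error"; dnr:0 (Do Not Retry clear) leaves the retry path open, which is
                     # why the run still finishes with io_failed: 0 below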
00:35:35.881 [2024-12-16 16:41:24.371345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.881 [2024-12-16 16:41:24.371353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.881 [2024-12-16 16:41:24.380137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.881 [2024-12-16 16:41:24.380157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.881 [2024-12-16 16:41:24.380165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.881 [2024-12-16 16:41:24.390228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf466e0) 00:35:35.881 [2024-12-16 16:41:24.390249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.881 [2024-12-16 16:41:24.390260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:35.881 24867.00 IOPS, 97.14 MiB/s 00:35:35.881 Latency(us) 00:35:35.881 [2024-12-16T15:41:24.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.882 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:35.882 nvme0n1 : 2.00 24887.27 97.22 0.00 0.00 5138.18 2371.78 17601.10 00:35:35.882 [2024-12-16T15:41:24.491Z] =================================================================================================================== 00:35:35.882 [2024-12-16T15:41:24.491Z] Total : 24887.27 97.22 0.00 0.00 5138.18 2371.78 17601.10 00:35:35.882 { 00:35:35.882 "results": [ 00:35:35.882 { 00:35:35.882 "job": "nvme0n1", 00:35:35.882 "core_mask": "0x2", 00:35:35.882 "workload": "randread", 00:35:35.882 "status": "finished", 00:35:35.882 "queue_depth": 128, 00:35:35.882 "io_size": 4096, 00:35:35.882 "runtime": 2.003514, 00:35:35.882 "iops": 24887.273061231415, 00:35:35.882 "mibps": 97.21591039543522, 00:35:35.882 "io_failed": 0, 00:35:35.882 "io_timeout": 0, 00:35:35.882 "avg_latency_us": 5138.180165523512, 00:35:35.882 "min_latency_us": 2371.7790476190476, 00:35:35.882 "max_latency_us": 17601.097142857143 00:35:35.882 } 00:35:35.882 ], 00:35:35.882 "core_count": 1 00:35:35.882 } 00:35:35.882 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:35.882 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:35.882 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:35.882 | .driver_specific 00:35:35.882 | .nvme_error 00:35:35.882 | .status_code 00:35:35.882 | .command_transient_transport_error' 00:35:35.882 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 )) 00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196011
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196011 ']'
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196011
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196011
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196011'
00:35:36.141 killing process with pid 1196011
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196011
00:35:36.141 Received shutdown signal, test time was about 2.000000 seconds
00:35:36.141
00:35:36.141 Latency(us)
[2024-12-16T15:41:24.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T15:41:24.750Z] ===================================================================================================================
[2024-12-16T15:41:24.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:36.141 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196011
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196471
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196471 /var/tmp/bperf.sock
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196471 ']'
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
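The launch step traced above amounts to the following (a sketch; the bdevperf command line is verbatim from the trace, while the polling loop is only an assumption about what waitforlisten does, the real helper lives in autotest_common.sh):

    # Start bdevperf pinned to core 1 (-m 2) with its own RPC socket, in wait-for-RPC
    # mode (-z): 2 s of 128 KiB random reads at queue depth 16, kicked off later by
    # the perform_tests RPC rather than at process start.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Block until the app answers on the UNIX domain socket (hypothetical polling loop).
    while ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done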
00:35:36.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:36.400 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:36.401 [2024-12-16 16:41:24.866278] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:36.401 [2024-12-16 16:41:24.866322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196471 ]
00:35:36.401 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:36.401 Zero copy mechanism will not be used.
00:35:36.401 [2024-12-16 16:41:24.939912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:36.401 [2024-12-16 16:41:24.961014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:36.663 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:37.233 nvme0n1
00:35:37.233 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:37.233 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:37.233 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:37.233 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:37.233 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
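Condensed, the setup traced above is the following sequence (RPC commands verbatim from the trace; that rpc_cmd talks to the nvmf target app on its default socket while bperf_rpc talks to bdevperf on /var/tmp/bperf.sock is an assumption read from the helper names):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bdevperf side: keep per-status-code NVMe error counters and retry failed I/O
    # forever, so injected digest errors are counted instead of failing the job.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side: start with crc32c error injection disabled ...
    $rpc accel_error_inject_error -o crc32c -t disable

    # ... attach the subsystem with TCP data digest enabled (--ddgst), creating nvme0n1 ...
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ... then corrupt crc32c results at interval 32, so the data digests put on the
    # wire are periodically wrong and the host sees digest errors on receive.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32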
00:35:37.233 16:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:37.233 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:37.233 Zero copy mechanism will not be used.
00:35:37.233 Running I/O for 2 seconds...
00:35:37.233 [2024-12-16 16:41:25.673271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130)
00:35:37.233 [2024-12-16 16:41:25.673304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.233 [2024-12-16 16:41:25.673314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... some forty further records elided for readability: from 16:41:25.678 to 16:41:25.874 the same three-entry pattern repeats on tqpair=(0x672130), now for len:32 READs at roughly 5 ms intervals, differing only in timestamp, cid, lba, and sqhd ...]
00:35:37.494 [2024-12-16 16:41:25.879123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130)
00:35:37.494 [2024-12-16 16:41:25.879143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.494 [2024-12-16 16:41:25.879151] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.884275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.884295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.884309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.889408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.889428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.894546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.894566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.894574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.899711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.899731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.899738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.904846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.904866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.904873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.909968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.909988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.909996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.915078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.915104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.915112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.920204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.920224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.920232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.925348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.494 [2024-12-16 16:41:25.925368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.494 [2024-12-16 16:41:25.925375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.494 [2024-12-16 16:41:25.930480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.930505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.930513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.935616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.935636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.935644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.940798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.940818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.940826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.945887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.945907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.945914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.951005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.951025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:37.495 [2024-12-16 16:41:25.951033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.956169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.956189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.956196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.961304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.961324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.961332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.966448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.966470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.966479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.971560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.971580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.971588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.976682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.976703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.976710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.981819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.981839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.981847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.986730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.986750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.986758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.992595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.992617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.992624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:25.998308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:25.998328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:25.998335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.004286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.004307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.004315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.010962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.010983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.010991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.018564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.018586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.018594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.024433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.024454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.024465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.028258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.028278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.028286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.034696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.034716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.034724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.039941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.039960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.039968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.045240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.045260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.045268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.051078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.051104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.051112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.058331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.058351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.058359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.065901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.065921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.065929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.072810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.072830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.072838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.079099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.079120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.079128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.084633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.495 [2024-12-16 16:41:26.084653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.495 [2024-12-16 16:41:26.084660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.495 [2024-12-16 16:41:26.089814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.496 [2024-12-16 16:41:26.089834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.496 [2024-12-16 16:41:26.089841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.496 [2024-12-16 16:41:26.095726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.496 [2024-12-16 16:41:26.095746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.496 [2024-12-16 16:41:26.095754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.103004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.103023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.103032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.109923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.109943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.109952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.116079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 
[2024-12-16 16:41:26.116104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.116112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.122124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.122143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.122151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.128339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.128359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.128370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.134228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.134248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.134256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.139926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.139947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.139955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.145479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.145500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.145508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.150695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.150715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.150723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.155797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.155817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.155825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.160905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.160925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.160932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.166064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.166085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.166092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.171350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.171371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.176575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.176599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.176609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.181844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.181864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.181873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.186982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.187003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.187010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.192241] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.192262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.192270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.197489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.197509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.197517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.202664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.202684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.202692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.207895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.207915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.207922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.213186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.213206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.213213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.218338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.218358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.218366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.223446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.223466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.223474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:35:37.756 [2024-12-16 16:41:26.228634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.756 [2024-12-16 16:41:26.228654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.756 [2024-12-16 16:41:26.228662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.756 [2024-12-16 16:41:26.233741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.233761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.233769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.238976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.238996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.239004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.244446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.244466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.244474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.249670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.249690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.249698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.254842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.254862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.254870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.260011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.260031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.260039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.265204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.265223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.265234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.270310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.270330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.270338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.275513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.275531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.275539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.280325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.280345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.280353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.285464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.285484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.285491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.290603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.290624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.290632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.295706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.295725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.295732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.300466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.300486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.300494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.305580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.305601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.305609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.310677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.310701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.310708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.315832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.315851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.315859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.320981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.321001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.326132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.326152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.326160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.331176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.331196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.331204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.336170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.336190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.336199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.341268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.341288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.341296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.346401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.346421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.346429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.351565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.351586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.351594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.356653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.356673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.356681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:37.757 [2024-12-16 16:41:26.361892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:37.757 [2024-12-16 16:41:26.361913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.757 [2024-12-16 16:41:26.361921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.017 [2024-12-16 16:41:26.367113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.017 [2024-12-16 16:41:26.367133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.017 
[2024-12-16 16:41:26.367141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:38.017 [2024-12-16 16:41:26.372262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130)
00:35:38.018 [2024-12-16 16:41:26.372281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.018 [2024-12-16 16:41:26.372289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-entry pattern — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130), a READ sqid:1 nsid:1 len:32 command with varying cid and lba, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats roughly every 5 ms from 16:41:26.377 through 16:41:26.664 ...]
00:35:38.279 5850.00 IOPS, 731.25 MiB/s [2024-12-16T15:41:26.888Z]
[... the same repeating pattern continues from 16:41:26.670 through 16:41:27.109 ...]
00:35:38.543 [2024-12-16 16:41:27.114954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130)
00:35:38.543 [2024-12-16 16:41:27.114974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.543
[2024-12-16 16:41:27.114982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.543 [2024-12-16 16:41:27.120029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.543 [2024-12-16 16:41:27.120049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.543 [2024-12-16 16:41:27.120057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.543 [2024-12-16 16:41:27.125349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.543 [2024-12-16 16:41:27.125370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.543 [2024-12-16 16:41:27.125378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.543 [2024-12-16 16:41:27.130840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.543 [2024-12-16 16:41:27.130860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.543 [2024-12-16 16:41:27.130868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.543 [2024-12-16 16:41:27.136135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.543 [2024-12-16 16:41:27.136154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.543 [2024-12-16 16:41:27.136162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.543 [2024-12-16 16:41:27.141358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.543 [2024-12-16 16:41:27.141378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.543 [2024-12-16 16:41:27.141385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.543 [2024-12-16 16:41:27.146628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.543 [2024-12-16 16:41:27.146648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.543 [2024-12-16 16:41:27.146656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.803 [2024-12-16 16:41:27.151826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.803 [2024-12-16 16:41:27.151846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:38.803 [2024-12-16 16:41:27.151854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.803 [2024-12-16 16:41:27.157024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.803 [2024-12-16 16:41:27.157047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.803 [2024-12-16 16:41:27.157055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.803 [2024-12-16 16:41:27.162416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.803 [2024-12-16 16:41:27.162435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.803 [2024-12-16 16:41:27.162443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.803 [2024-12-16 16:41:27.167765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.167785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.167793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.173018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.173038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.173046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.178377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.178395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.178403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.183693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.183713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.183721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.188954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.188974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.188982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.194275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.194296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.194304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.199698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.199717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.199724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.205011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.205031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.205039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.210339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.210359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.210367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.215651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.215670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.215678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.220687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.220707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.220714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.226238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.226265] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.226273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.231590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.231610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.231618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.236851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.236872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.236879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.242179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.242200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.242208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.247730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.247750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.247763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.253000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.253020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.253028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.258261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.258282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.258289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.263560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.263580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.263587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.268886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.268906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.268913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.274137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.274157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.274165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.279644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.279664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.279672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.285063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.285084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.285091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.290367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.290387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.290395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.295842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.295866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.295873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.301273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 
[2024-12-16 16:41:27.301294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.301301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.306467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.306488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.804 [2024-12-16 16:41:27.306496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.804 [2024-12-16 16:41:27.311696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.804 [2024-12-16 16:41:27.311718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.311726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.317135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.317155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.317163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.322606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.322626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.322633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.327878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.327898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.327906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.333145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.333165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.333173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.338342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.338362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.338370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.343644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.343664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.343671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.348988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.349008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.349015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.354429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.354449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.359803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.359823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.359831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.365173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.365194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.365201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.370478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.370498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.370506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.375894] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.375915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.375922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.381152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.381172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.381180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.386489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.386510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.386520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.391886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.391906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.391914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.397082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.397108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.397116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.402125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.402145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.402152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.805 [2024-12-16 16:41:27.407304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:38.805 [2024-12-16 16:41:27.407324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.805 [2024-12-16 16:41:27.407332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
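The stream above and below is one repeating pattern: nvme_tcp.c:1365 (nvme_tcp_accel_seq_recv_compute_crc32_done) reports that the CRC32C computed over a received C2H data PDU does not match the PDU's data digest, and the affected READ is then completed with the generic NVMe status Transient Transport Error (printed as 00/22, i.e. SCT 00h / SC 22h) with dnr:0, so the host may retry. This digest_error test drives that failure path deliberately (the injection is set up earlier in the log), so the flood of errors here is the expected outcome, not a regression. As a hypothetical cross-check, not part of digest.sh, the completions in a captured console log (illustrative path below) could be tallied and later compared against the RPC counter the harness reads after the run:

  # Hypothetical cross-check, not part of the test; the log path is illustrative only.
  # grep -o counts every occurrence even if several records share one line.
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' /tmp/bperf_console.log | wc -l

The error stream continues below until bdevperf's 2-second randread pass completes.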
00:35:39.065 [2024-12-16 16:41:27.412853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.065 [2024-12-16 16:41:27.412874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.065 [2024-12-16 16:41:27.412881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.065 [2024-12-16 16:41:27.418309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.065 [2024-12-16 16:41:27.418329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.065 [2024-12-16 16:41:27.418336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.065 [2024-12-16 16:41:27.423996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.065 [2024-12-16 16:41:27.424016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.065 [2024-12-16 16:41:27.424024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.065 [2024-12-16 16:41:27.429310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.065 [2024-12-16 16:41:27.429330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.065 [2024-12-16 16:41:27.429339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.065 [2024-12-16 16:41:27.434550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.065 [2024-12-16 16:41:27.434575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.065 [2024-12-16 16:41:27.434582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.065 [2024-12-16 16:41:27.439724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.439744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.439752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.444841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.444862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.444870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.449984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.450004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.450013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.455337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.455357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.455365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.460604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.460624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.460632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.465873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.465893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.465901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.471367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.471388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.471395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.476849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.476869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.476877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.482597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.482616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.482624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.487680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.487701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.487708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.492856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.492876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.492884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.498046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.498066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.498074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.503204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.503224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.508423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.508444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.508451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.513696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.513716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.513723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.519002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.519022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.519030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.524573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.524593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.524604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.530161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.530180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.530187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.535493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.535513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.535521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.540804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.540824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.540832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.546275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.546295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.546302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.551548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.551568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.551576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.556789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.556809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 
[2024-12-16 16:41:27.556817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.562091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.562118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.562125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.567374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.567395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.567402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.572657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.572680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.572688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.577938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.577958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.577965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.583164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.066 [2024-12-16 16:41:27.583184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.066 [2024-12-16 16:41:27.583191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.066 [2024-12-16 16:41:27.588498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.588519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.588527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.594142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.594162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.594170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.599218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.599247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.604540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.604560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.604568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.609951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.609972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.609979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.615275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.615296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.615304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.620659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.620679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.620686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.626149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.626169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.626177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.631172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.631192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.631200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.636378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.636399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.636406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.641570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.641591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.641598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.646723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.646743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.646750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.652039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.652059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.652067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.657330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.657351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.657359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.662694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.662715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.067 [2024-12-16 16:41:27.662726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:39.067 [2024-12-16 16:41:27.668019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x672130) 00:35:39.067 [2024-12-16 16:41:27.668040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:39.067 [2024-12-16 16:41:27.668048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:39.326 5875.50 IOPS, 734.44 MiB/s
00:35:39.326 Latency(us)
00:35:39.326 [2024-12-16T15:41:27.935Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:39.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:39.326 nvme0n1                     :       2.00    5878.09     734.76      0.00     0.00    2719.10     628.05   11047.50
00:35:39.326 [2024-12-16T15:41:27.935Z] ===================================================================================================================
00:35:39.326 [2024-12-16T15:41:27.935Z] Total                       :            5878.09     734.76      0.00     0.00    2719.10     628.05   11047.50
00:35:39.326 {
00:35:39.326   "results": [
00:35:39.326     {
00:35:39.326       "job": "nvme0n1",
00:35:39.326       "core_mask": "0x2",
00:35:39.326       "workload": "randread",
00:35:39.326       "status": "finished",
00:35:39.326       "queue_depth": 16,
00:35:39.326       "io_size": 131072,
00:35:39.326       "runtime": 2.00184,
00:35:39.326       "iops": 5878.0921552172,
00:35:39.326       "mibps": 734.76151940215,
00:35:39.326       "io_failed": 0,
00:35:39.326       "io_timeout": 0,
00:35:39.326       "avg_latency_us": 2719.09591877203,
00:35:39.326       "min_latency_us": 628.0533333333333,
00:35:39.326       "max_latency_us": 11047.497142857143
00:35:39.326     }
00:35:39.326   ],
00:35:39.326   "core_count": 1
00:35:39.326 }
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:39.326 | .driver_specific
00:35:39.326 | .nvme_error
00:35:39.326 | .status_code
00:35:39.326 | .command_transient_transport_error'
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 ))
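For readers following the trace: get_transient_errcount does not parse the console noise above, it pulls bdevperf's per-bdev I/O statistics over the RPC socket, and because bdev_nvme_set_options --nvme-error-stat is in effect every COMMAND TRANSIENT TRANSPORT ERROR completion is accumulated under driver_specific.nvme_error.status_code in the bdev_get_iostat payload; the (( 380 > 0 )) check above asserts that the injected digest errors actually surfaced there. (The 734.76 MiB/s column is just 5878.09 IOPS x 128 KiB per I/O.) A minimal standalone sketch of the same query, reusing this run's socket path and bdev name:

  # Hedged sketch: count transient transport errors recorded by an SPDK app
  # that is listening on /var/tmp/bperf.sock, as in this job.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # digest.sh fails the test unless at least one such error was observed.
  (( errcount > 0 )) && echo "nvme0n1 saw $errcount transient transport errors"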
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196471
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196471 ']'
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196471
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196471
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196471'
killing process with pid 1196471
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196471
00:35:39.585 Received shutdown signal, test time was about 2.000000 seconds
00:35:39.585
00:35:39.585 Latency(us)
00:35:39.585 [2024-12-16T15:41:28.194Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:39.585 [2024-12-16T15:41:28.194Z] ===================================================================================================================
00:35:39.585 [2024-12-16T15:41:28.194Z] Total                       :               0.00       0.00      0.00     0.00       0.00       0.00       0.00
16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196471
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196931
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196931 /var/tmp/bperf.sock
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196931 ']'
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:39.845 [2024-12-16 16:41:28.151834] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
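The run_bperf_err trace above drives the next error pass (randwrite, 4 KiB I/O, queue depth 128) through the same bdevperf binary, started idle with -z on a private RPC socket so it can be configured before any I/O runs; waitforlisten then polls that socket until the app answers. A rough bash equivalent of the launch-and-wait step, with the polling loop as an illustrative stand-in for the waitforlisten helper (paths and flags are the ones traced above):

  # Hedged sketch of the bdevperf launch traced above.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Stand-in for waitforlisten: retry a harmless RPC until the UNIX-domain
  # socket accepts connections.
  until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done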
00:35:39.585 [2024-12-16 16:41:28.151886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196931 ]
00:35:39.845 [2024-12-16 16:41:28.228435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:39.845 [2024-12-16 16:41:28.250667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:40.362 nvme0n1
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:40.621 Running I/O for 2 seconds...
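Before perform_tests starts the 2-second run, the trace above amounts to a short RPC recipe: enable NVMe error accounting with unlimited retries on the initiator, attach the target over TCP with data digest (--ddgst) enabled, and re-arm the accel crc32c error injector for 256 corruptions so the computed digests come out wrong and every affected command completes with a transient transport error. In this trace, bperf_rpc targets bdevperf's /var/tmp/bperf.sock while rpc_cmd appears to go to the nvmf target's default RPC socket. Condensed into plain rpc.py calls, as a sketch of this run's parameters rather than a general recipe:

  # Hedged sketch of the RPC sequence behind the trace above.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_ROOT/scripts/rpc.py"
  # Initiator side (bdevperf): count NVMe errors, retry failed I/O forever.
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the NVMe-oF TCP controller with data digest enabled; prints nvme0n1.
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side (default RPC socket in this test): corrupt 256 crc32c
  # operations so digests are produced or checked incorrectly.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the configured randwrite workload seen below.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests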
00:35:40.621 [2024-12-16 16:41:29.072515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee23b8 00:35:40.621 [2024-12-16 16:41:29.073818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.073847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.079861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee5a90 00:35:40.621 [2024-12-16 16:41:29.080625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.080646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.089525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efe720 00:35:40.621 [2024-12-16 16:41:29.090421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.090441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.099804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef3a28 00:35:40.621 [2024-12-16 16:41:29.100845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.100866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.108288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eeff18 00:35:40.621 [2024-12-16 16:41:29.109504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.109524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.116825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee3d08 00:35:40.621 [2024-12-16 16:41:29.117465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.117484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.126043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eec408 00:35:40.621 [2024-12-16 16:41:29.126710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.126733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.135292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eed4e8 00:35:40.621 [2024-12-16 16:41:29.135954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.135973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.144473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eee5c8 00:35:40.621 [2024-12-16 16:41:29.145159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.145178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.153932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efb8b8 00:35:40.621 [2024-12-16 16:41:29.154693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.154712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.162679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee49b0 00:35:40.621 [2024-12-16 16:41:29.163426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.163444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.172879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ede470 00:35:40.621 [2024-12-16 16:41:29.173780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.173800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.182368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee1f80 00:35:40.621 [2024-12-16 16:41:29.183403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.183422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.191048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee23b8 00:35:40.621 [2024-12-16 16:41:29.191931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.191950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.199845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef8618 00:35:40.621 [2024-12-16 16:41:29.200718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.200736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.210035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef8e88 00:35:40.621 [2024-12-16 16:41:29.211043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.211062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:40.621 [2024-12-16 16:41:29.219471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee99d8 00:35:40.621 [2024-12-16 16:41:29.220601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.621 [2024-12-16 16:41:29.220619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.228412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee9168 00:35:40.881 [2024-12-16 16:41:29.229523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.229541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.238075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef2510 00:35:40.881 [2024-12-16 16:41:29.239312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.239330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.246622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef5be8 00:35:40.881 [2024-12-16 16:41:29.247508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.247526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.255874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016edfdc0 00:35:40.881 [2024-12-16 16:41:29.256570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.256589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.266525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef81e0 00:35:40.881 [2024-12-16 16:41:29.268023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.268040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.273047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016edf550 00:35:40.881 [2024-12-16 16:41:29.273668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.273687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.281698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efda78 00:35:40.881 [2024-12-16 16:41:29.282260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.282278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.291855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eed0b0 00:35:40.881 [2024-12-16 16:41:29.292638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.292657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.301252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eff3c8 00:35:40.881 [2024-12-16 16:41:29.302142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.302161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.310190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef0350 00:35:40.881 [2024-12-16 16:41:29.310739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.310758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.321170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef7100 00:35:40.881 [2024-12-16 16:41:29.322659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.322678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.327682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef0bc0 00:35:40.881 [2024-12-16 16:41:29.328294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.328313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.336875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef0ff8 00:35:40.881 [2024-12-16 16:41:29.337608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.337626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.347112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eed0b0 00:35:40.881 [2024-12-16 16:41:29.347995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.348013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.356281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee6fa8 00:35:40.881 [2024-12-16 16:41:29.357105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.357123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.364816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee4140 00:35:40.881 [2024-12-16 16:41:29.365693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.365718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.375165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef1430 00:35:40.881 [2024-12-16 16:41:29.376152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.376171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.384622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eebb98 00:35:40.881 [2024-12-16 16:41:29.385737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 
16:41:29.385756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.393272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efc560 00:35:40.881 [2024-12-16 16:41:29.394381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.394399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.402856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee1b48 00:35:40.881 [2024-12-16 16:41:29.404077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.404099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.412154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efb8b8 00:35:40.881 [2024-12-16 16:41:29.413385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.413404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.419797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef8618 00:35:40.881 [2024-12-16 16:41:29.420258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.420277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.429317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eee5c8 00:35:40.881 [2024-12-16 16:41:29.429865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.429884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.439828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee5ec8 00:35:40.881 [2024-12-16 16:41:29.441197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.881 [2024-12-16 16:41:29.441215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.448348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016edece0 00:35:40.881 [2024-12-16 16:41:29.449337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:40.881 [2024-12-16 16:41:29.449355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:40.881 [2024-12-16 16:41:29.457576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef1868 00:35:40.882 [2024-12-16 16:41:29.458383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.882 [2024-12-16 16:41:29.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:40.882 [2024-12-16 16:41:29.466892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee5a90 00:35:40.882 [2024-12-16 16:41:29.468002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.882 [2024-12-16 16:41:29.468021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:40.882 [2024-12-16 16:41:29.475400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee4578 00:35:40.882 [2024-12-16 16:41:29.476501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.882 [2024-12-16 16:41:29.476519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:40.882 [2024-12-16 16:41:29.483967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efd208 00:35:40.882 [2024-12-16 16:41:29.484683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.882 [2024-12-16 16:41:29.484701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.495294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef9b30 00:35:41.141 [2024-12-16 16:41:29.496783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.496801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.503822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef0788 00:35:41.141 [2024-12-16 16:41:29.504941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.504959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.512160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee7818 00:35:41.141 [2024-12-16 16:41:29.513519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11498 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.513537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.520717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef8618 00:35:41.141 [2024-12-16 16:41:29.521468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.521486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.531026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee4140 00:35:41.141 [2024-12-16 16:41:29.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.532261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.539573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efa7d8 00:35:41.141 [2024-12-16 16:41:29.540365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.540383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.550718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eef6a8 00:35:41.141 [2024-12-16 16:41:29.552331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.552350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.557132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016edece0 00:35:41.141 [2024-12-16 16:41:29.557912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.568705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016eec840 00:35:41.141 [2024-12-16 16:41:29.570284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.570303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.575298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee23b8 00:35:41.141 [2024-12-16 16:41:29.576227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7145 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.576246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.586721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efe2e8 00:35:41.141 [2024-12-16 16:41:29.588155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.588173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.593351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef96f8 00:35:41.141 [2024-12-16 16:41:29.594042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.594060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.603996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ef8618 00:35:41.141 [2024-12-16 16:41:29.604902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.604923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.612435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016efb480 00:35:41.141 [2024-12-16 16:41:29.613238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.613256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.621903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.141 [2024-12-16 16:41:29.622225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.622244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.631249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.141 [2024-12-16 16:41:29.631376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.631394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.640659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.141 [2024-12-16 16:41:29.640788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:8922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.640805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.650047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.141 [2024-12-16 16:41:29.650204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.141 [2024-12-16 16:41:29.650222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.141 [2024-12-16 16:41:29.659606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.659737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.659755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.669201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.669333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.669351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.678906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.679037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.679055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.688543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.688685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.697987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.698114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.698131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.707353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.707481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.707498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.716837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.716986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.717003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.726301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.726460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.726476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.735739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.735868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.735885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.142 [2024-12-16 16:41:29.745220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.142 [2024-12-16 16:41:29.745358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.142 [2024-12-16 16:41:29.745375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.401 [2024-12-16 16:41:29.754788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.754917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.754934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.764179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.764330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.764357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.773571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.773700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.773718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.782933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.783064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.783081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.792337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.792483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.792500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.801747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.801895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.801912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.811172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.811336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.820568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.820696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.820713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.829907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.830034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.830068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.839543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 
16:41:29.839692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.839709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.849078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.849232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.849253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.858623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.858749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.858766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.867990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.868143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.868160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.877380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.877507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.877524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.886723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.886851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.886868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.896071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 00:35:41.402 [2024-12-16 16:41:29.896225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.402 [2024-12-16 16:41:29.896242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:41.402 [2024-12-16 16:41:29.905478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8 
00:35:41.402 [2024-12-16 16:41:29.905607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:41.402 [2024-12-16 16:41:29.905623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:35:41.402 [2024-12-16 16:41:29.914820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13eddc0) with pdu=0x200016ee01f8
00:35:41.402 [2024-12-16 16:41:29.914950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:41.402 [2024-12-16 16:41:29.914967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0
[... the same three-line pattern repeats roughly 120 times, about every 9-10 ms, from 16:41:29.905 through 16:41:31.066 on tqpair=(0x13eddc0): a data digest error reported by tcp.c:2241:data_crc32_calc_done, the affected 0x1000-byte WRITE on qid:1 (cid cycling through 4/33/49/94/114, lba varying), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:006b p:0 m:0 dnr:0; a periodic bperf sample mid-run reported 27398.00 IOPS, 107.02 MiB/s [2024-12-16T15:41:30.271Z] ...]
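What the flood above records is the NVMe/TCP data digest (DDGST) check failing on every inbound data PDU: with digests negotiated, each data PDU carries a trailing CRC-32C over its payload, the receiver recomputes it (the data_crc32_calc_done callback at tcp.c:2241), and a mismatch fails the command with the generic transient transport error status (00/22). Below is a minimal sketch of that check in Python with a bitwise CRC-32C; the 4-byte little-endian digest placement is an assumption here, and SPDK's real path is table- and instruction-accelerated C with full PDU parsing.

```python
import struct

CRC32C_POLY_REFLECTED = 0x82F63B78  # CRC-32C (Castagnoli), bit-reversed polynomial

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C; check value: crc32c(b"123456789") == 0xE3069283."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ CRC32C_POLY_REFLECTED
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def ddgst_ok(payload: bytes, trailing_digest: bytes) -> bool:
    """Recompute the digest over the received payload and compare.
    A mismatch is what the log prints as
    'tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error'."""
    (received,) = struct.unpack("<I", trailing_digest)  # little-endian assumed
    return crc32c(payload) == received

payload = bytes(4096)                        # 0x1000 bytes, as in the WRITEs above
digest = struct.pack("<I", crc32c(payload))  # digest the sender would append
corrupted = b"\x01" + payload[1:]            # fault injection: byte flipped post-digest
assert ddgst_ok(payload, digest)
assert not ddgst_ok(corrupted, digest)       # -> transient transport error (00/22)
```

The digest test deliberately corrupts data after the digest is computed, which is why every WRITE in the window above completes with status 00/22 rather than failing hard: dnr:0 leaves each command retryable, and that is exactly the counter the harness checks below.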
27154.00 IOPS, 106.07 MiB/s
Latency(us):
[2024-12-16T15:41:31.310Z] Device Information : runtime(s)   IOPS      MiB/s   Fail/s  TO/s  Average  min      max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1 : 2.01   27154.79  106.07  0.00  0.00  4705.74  2075.31  11297.16
Total  :         27154.79  106.07  0.00  0.00  4705.74  2075.31  11297.16
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.005834,
      "iops": 27154.789479089497,
      "mibps": 106.07339640269335,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 4705.741173672965,
      "min_latency_us": 2075.306666666667,
      "max_latency_us": 11297.158095238095
    }
  ],
  "core_count": 1
}
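The results block is the machine-readable form of the table above, and its fields are internally consistent. Two quick checks, sketched in Python: the dict literal only mirrors values copied from the block, and the Little's-law estimate assumes bperf kept the 128-deep queue full for the whole 2-second run.

```python
# Values mirrored from the results block above.
results = {
    "queue_depth": 128,
    "io_size": 4096,
    "iops": 27154.789479089497,
    "mibps": 106.07339640269335,
    "avg_latency_us": 4705.741173672965,
}

# MiB/s is just IOPS times the 4096-byte IO size.
assert abs(results["iops"] * results["io_size"] / 2**20 - results["mibps"]) < 1e-6

# Little's law: with a full queue, mean latency ~= queue_depth / IOPS.
estimate_us = results["queue_depth"] / results["iops"] * 1e6
print(f"estimated {estimate_us:.1f} us vs reported {results['avg_latency_us']:.1f} us")
# -> estimated 4713.7 us vs reported 4705.7 us (~0.2% apart)
```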
"max_latency_us": 11297.158095238095 00:35:42.701 } 00:35:42.701 ], 00:35:42.701 "core_count": 1 00:35:42.701 } 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:42.701 | .driver_specific 00:35:42.701 | .nvme_error 00:35:42.701 | .status_code 00:35:42.701 | .command_transient_transport_error' 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196931 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196931 ']' 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196931 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.701 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196931 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196931' 00:35:42.960 killing process with pid 1196931 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196931 00:35:42.960 Received shutdown signal, test time was about 2.000000 seconds 00:35:42.960 00:35:42.960 Latency(us) 00:35:42.960 [2024-12-16T15:41:31.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.960 [2024-12-16T15:41:31.569Z] =================================================================================================================== 00:35:42.960 [2024-12-16T15:41:31.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196931 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:42.960 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197598 
00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197598 /var/tmp/bperf.sock 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197598 ']' 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:42.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.961 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:42.961 [2024-12-16 16:41:31.547885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:42.961 [2024-12-16 16:41:31.547931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197598 ] 00:35:42.961 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:42.961 Zero copy mechanism will not be used. 
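[editor's note] The bdevperf launch traced just above can be reproduced outside the CI harness. A minimal sketch, assuming the SPDK repo root as the working directory (the absolute /var/jenkins paths are specific to this workspace) and a simple socket poll standing in for the harness's waitforlisten helper:

    # Start bdevperf as an RPC-driven server; -z makes it idle until a
    # perform_tests request arrives on the RPC socket. Flags match the
    # trace above: core mask 0x2, randwrite, 128 KiB I/O, qd 16, 2 s runs.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Stand-in for waitforlisten: poll until the UNIX-domain RPC socket
    # exists before issuing any RPCs against it.
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done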
00:35:43.219 [2024-12-16 16:41:31.622851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.220 [2024-12-16 16:41:31.645565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.220 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.220 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:43.220 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:43.220 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:43.478 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:43.478 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.478 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:43.478 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.478 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.478 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.737 nvme0n1 00:35:43.737 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:43.737 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.737 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:43.997 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.997 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:43.997 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:43.997 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:43.997 Zero copy mechanism will not be used. 00:35:43.997 Running I/O for 2 seconds... 
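[editor's note] The RPC sequence traced above is the whole ddgst error case in miniature. A hedged recap as a standalone sketch, not the harness script itself: every RPC name and flag below appears verbatim in the trace, the relative paths assume the SPDK repo root, and the final jq filter is the one get_transient_errcount used earlier in the log (which counted 213 on the first run):

    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"

    # Record NVMe error completions per status code, and retry failed I/O
    # indefinitely so injected digest errors never abort the run outright.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale injection, then attach with TCP data digest enabled.
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations so data digests go out bad,
    # producing the "Data digest error on tqpair" floods seen below.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the configured 2-second randwrite workload.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # Pass criterion: at least one COMMAND TRANSIENT TRANSPORT ERROR
    # (00/22) completion was counted for the bdev.
    errs=$($rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
          | .driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))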
00:35:43.997 [2024-12-16 16:41:32.435434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.435507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.435539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.441304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.441369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.441391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.445827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.445882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.445903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.450330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.450401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.450421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.454959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.455025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.455043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.459349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.459404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.459422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.463725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.463792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.463811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.468074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.468171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.468188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.472508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.472569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.472586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.477063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.477143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.477165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.997 [2024-12-16 16:41:32.482004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.997 [2024-12-16 16:41:32.482068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.997 [2024-12-16 16:41:32.482086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.486342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.486397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.486415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.491132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.491196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.491213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.496014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.496153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.496172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.501519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.501578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.501596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.506521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.506572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.506590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.511165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.511255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.511273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.515982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.516087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.516113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.520893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.520973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.520992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.525484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.525550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.525568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.529846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.529918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.529935] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.534129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.534194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.534213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.538457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.538533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.538551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.542744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.542829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.542848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.547577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.547683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.547700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.551990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.552048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.552066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.556671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.556736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.556757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.561367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.561423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.561441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.565654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.565710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.565728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.569896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.569969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.569987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.574120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.574180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.574199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.578555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.578655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.578673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.584144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.584310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.584328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.590259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.590436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.590453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.596644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.596820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 
16:41:32.596838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.998 [2024-12-16 16:41:32.603488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:43.998 [2024-12-16 16:41:32.603607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.998 [2024-12-16 16:41:32.603629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.609484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.609816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.609837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.615571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.615877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.615897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.621492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.621852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.621873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.627463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.627810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.627829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.633539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.633876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.633895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.639711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.640030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:44.259 [2024-12-16 16:41:32.640049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.646294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.646639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.646659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.652974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.653234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.653255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.659451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.659689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.659709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.665892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.666137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.666157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.672215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.672455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.672475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.678410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.678646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.678665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.683429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.683667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.683687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.688431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.688683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.688709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.693526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.693767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.259 [2024-12-16 16:41:32.693787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.259 [2024-12-16 16:41:32.698920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.259 [2024-12-16 16:41:32.699189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.699209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.704866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.705140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.705163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.710307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.710551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.710571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.715668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.715928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.715948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.721617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.721852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.721872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.726996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.727237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.727258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.732421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.732671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.732691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.737994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.738235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.738255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.743279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.743513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.743532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.747955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.748196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.748216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.752457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.752703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.752727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.756841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.757093] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.757120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.761134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.761372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.761391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.765411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.765647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.765667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.769832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.770070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.770090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.774315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.774551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.774570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.778627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.778861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.778880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.783058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.783297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.783317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.787296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.787534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.787553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.791683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.791928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.791948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.795943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.796185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.796205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.800153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.800388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.800407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.804385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.804618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.804638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.808605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.808844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.808864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.812755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.812991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.813011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.816909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 
16:41:32.817148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.817168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.821129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.821367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.821387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.825421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.260 [2024-12-16 16:41:32.825669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.260 [2024-12-16 16:41:32.825691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.260 [2024-12-16 16:41:32.829678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.261 [2024-12-16 16:41:32.829916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.261 [2024-12-16 16:41:32.829936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.261 [2024-12-16 16:41:32.833845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.261 [2024-12-16 16:41:32.834081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.261 [2024-12-16 16:41:32.834108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.261 [2024-12-16 16:41:32.838065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.261 [2024-12-16 16:41:32.838319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.261 [2024-12-16 16:41:32.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.261 [2024-12-16 16:41:32.842515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:44.261 [2024-12-16 16:41:32.842773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.261 [2024-12-16 16:41:32.842793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.261 [2024-12-16 16:41:32.846975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 
00:35:44.261 [2024-12-16 16:41:32.847217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.261 [2024-12-16 16:41:32.847237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.261 [2024-12-16 16:41:32.851306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8
00:35:44.261 [2024-12-16 16:41:32.851544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.261 [2024-12-16 16:41:32.851563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (data_crc32_calc_done *ERROR*, WRITE command print, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further len:32 WRITEs on tqpair 0x13ee2a0, lba varying per command and sqhd stepping 0002/0022/0042/0062, from 16:41:32.855 through 16:41:33.433 ...]
00:35:45.046 6126.00 IOPS, 765.75 MiB/s [2024-12-16T15:41:33.655Z]
00:35:45.046 [2024-12-16 16:41:33.438724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8
00:35:45.046 [2024-12-16 16:41:33.438933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.046 [2024-12-16 16:41:33.438952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the pattern continues unchanged from 16:41:33.443 through 16:41:33.579 ...]
00:35:45.047 [2024-12-16 16:41:33.584412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8
00:35:45.047 [2024-12-16 16:41:33.584622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.047 [2024-12-16 16:41:33.584648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.047 [2024-12-16 16:41:33.589492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.047 [2024-12-16 16:41:33.589700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.047 [2024-12-16 16:41:33.589719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.047 [2024-12-16 16:41:33.594470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.594676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.594696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.600620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.600830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.600849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.605262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.605474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.605493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.609647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.609856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.609875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.613895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.614108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.614130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.618033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.618249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.618269] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.622420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.622639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.622658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.626758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.626964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.626984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.631065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.631281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.631301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.635449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.635660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.635679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.639850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.640078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.640103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.644237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.644472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.644491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.048 [2024-12-16 16:41:33.648743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.048 [2024-12-16 16:41:33.648956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.048 [2024-12-16 16:41:33.648975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.652924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.653161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.653181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.657178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.657389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.657409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.661467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.661677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.661696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.666223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.666448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.666468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.671148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.671363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.671386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.676025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.676241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.676261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.680474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.680683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 
16:41:33.680703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.685651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.685859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.685878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.690354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.690562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.690583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.695017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.695232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.695251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.699724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.699951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.699970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.704557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.704771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.704791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.709036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.709254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.709273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.713440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.713664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:45.309 [2024-12-16 16:41:33.713685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.718517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.718745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.718765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.723561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.723785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.723805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.728042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.728272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.728293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.732500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.732727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.732750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.736655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.736862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.736881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.740994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.741232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.741251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.745447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.745656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.745676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.749844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.750071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.750090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.754327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.754554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.754575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.758741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.758973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.758992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.762894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.763111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.763130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.767246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.767457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.309 [2024-12-16 16:41:33.767476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.309 [2024-12-16 16:41:33.771740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.309 [2024-12-16 16:41:33.771954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.771973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.776628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.776835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.776855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.781379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.781586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.781605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.786494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.786701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.786720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.791685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.791895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.791914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.796205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.796412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.796438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.800559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.800766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.800785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.804754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.804963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.804982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.808913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.809126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.809145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.813090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.813311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.813330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.817256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.817466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.817489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.821431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.821639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.821658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.825749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.825959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.825979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.830382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.830588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.830613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.835056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.835267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.835287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.840017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.840246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.840266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.845215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.845424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.845443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.850471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.850682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.850705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.854976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.855192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.855212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.859444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.859650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.859676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.863669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.863883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.863903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.868085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.868314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.868334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.872583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 
16:41:33.872809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.872829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.877153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.877363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.877382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.881659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.881868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.881888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.886244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.886453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.886472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.890742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.890956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.890976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.895233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.895443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.310 [2024-12-16 16:41:33.895462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.310 [2024-12-16 16:41:33.899703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.310 [2024-12-16 16:41:33.899912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.311 [2024-12-16 16:41:33.899931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.311 [2024-12-16 16:41:33.904257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with 
pdu=0x200016eff3c8 00:35:45.311 [2024-12-16 16:41:33.904465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.311 [2024-12-16 16:41:33.904485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.311 [2024-12-16 16:41:33.908816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.311 [2024-12-16 16:41:33.909024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.311 [2024-12-16 16:41:33.909043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.311 [2024-12-16 16:41:33.913308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.311 [2024-12-16 16:41:33.913521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.311 [2024-12-16 16:41:33.913540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.917772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.917981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.918001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.921974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.922192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.922212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.926249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.926481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.926501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.930604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.930812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.930832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.935135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.935343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.935362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.940164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.940389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.940408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.944974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.945187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.945207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.949439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.949648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.949667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.953835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.954060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.954079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.958346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.958558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.962672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.962884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.962904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.967193] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.967409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.967435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.972361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.972586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.972605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.977217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.977441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.977461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.982040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.982271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.982291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.987080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.987326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.987347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.992404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.992615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.571 [2024-12-16 16:41:33.992634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.571 [2024-12-16 16:41:33.997063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.571 [2024-12-16 16:41:33.997282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:33.997302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.001877] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.002083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.002110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.006225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.006433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.006453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.010784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.010999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.011019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.015367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.015576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.015595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.019446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.019656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.019676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.023570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.023778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.023797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.027634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.027844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.027863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 
[2024-12-16 16:41:34.031690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.031900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.031919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.035747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.035958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.035977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.039807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.040017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.040036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.044598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.044884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.044904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.050444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.050712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.050732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.056036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.056275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.056295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.060938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.061154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.061173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.065386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.065596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.065616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.069906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.070123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.070143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.074663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.074956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.074976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.080563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.080897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.080917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.086143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.086203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.086220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.091947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.092088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.092117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.098062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.098236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.098253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.104294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.104422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.104440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.111194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.111327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.111345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.118070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.118220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.118238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.125377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.125535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.125553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.132456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.132597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.132614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.140126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.572 [2024-12-16 16:41:34.140223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.572 [2024-12-16 16:41:34.140240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.572 [2024-12-16 16:41:34.148016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.573 [2024-12-16 16:41:34.148166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.573 [2024-12-16 16:41:34.148183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.573 [2024-12-16 16:41:34.155896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.573 [2024-12-16 16:41:34.156005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.573 [2024-12-16 16:41:34.156022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.573 [2024-12-16 16:41:34.163652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.573 [2024-12-16 16:41:34.163828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.573 [2024-12-16 16:41:34.163845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.573 [2024-12-16 16:41:34.171105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.573 [2024-12-16 16:41:34.171254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.573 [2024-12-16 16:41:34.171272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.178457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.178615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.178633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.186008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.186148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.186165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.193856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.194001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.194019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.202081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.202224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.202242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.209772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.209913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.209947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.217332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.217466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.217484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.225058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.225248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.225267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.232080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.232189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.232207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.238899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.238971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.238990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.245893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.246074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.246092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.253473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.253671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 
16:41:34.253690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.260246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.260359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.260377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.266765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.266950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.266968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.271921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.272035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.272053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.276129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.276183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.276204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.280195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.280251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.280269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.833 [2024-12-16 16:41:34.284469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.833 [2024-12-16 16:41:34.284555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.833 [2024-12-16 16:41:34.284572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.289664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.289833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:45.834 [2024-12-16 16:41:34.289851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.295802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.295999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.296016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.301060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.301222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.301240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.308012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.308182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.308200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.314897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.315053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.315071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.321963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.322148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.322166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.330126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.330287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.330305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.337771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.337960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.337977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.344971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.345128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.345146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.352598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.352751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.352769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.359816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.359994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.360012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.367607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.367787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.367804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.375344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.375416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.375434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.383151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.383321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.383338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.391022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.391212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.391230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.398269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.398340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.398357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.404278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.404335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.404352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.410177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.410268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.410285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.417355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.417418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.417436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.423483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.423578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.423596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.429325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.429424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.429442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.834 [2024-12-16 16:41:34.435366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:45.834 [2024-12-16 16:41:34.435479] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.834 [2024-12-16 16:41:34.435497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:46.094 6022.50 IOPS, 752.81 MiB/s [2024-12-16T15:41:34.703Z] [2024-12-16 16:41:34.442513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ee2a0) with pdu=0x200016eff3c8 00:35:46.094 [2024-12-16 16:41:34.442607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.094 [2024-12-16 16:41:34.442624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:46.094 00:35:46.094 Latency(us) 00:35:46.094 [2024-12-16T15:41:34.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:46.094 nvme0n1 : 2.00 6016.50 752.06 0.00 0.00 2654.36 1888.06 8301.23 00:35:46.094 [2024-12-16T15:41:34.703Z] =================================================================================================================== 00:35:46.094 [2024-12-16T15:41:34.703Z] Total : 6016.50 752.06 0.00 0.00 2654.36 1888.06 8301.23 00:35:46.094 { 00:35:46.094 "results": [ 00:35:46.094 { 00:35:46.094 "job": "nvme0n1", 00:35:46.094 "core_mask": "0x2", 00:35:46.094 "workload": "randwrite", 00:35:46.094 "status": "finished", 00:35:46.094 "queue_depth": 16, 00:35:46.094 "io_size": 131072, 00:35:46.094 "runtime": 2.004654, 00:35:46.094 "iops": 6016.499605418192, 00:35:46.094 "mibps": 752.062450677274, 00:35:46.094 "io_failed": 0, 00:35:46.094 "io_timeout": 0, 00:35:46.094 "avg_latency_us": 2654.3585914458645, 00:35:46.094 "min_latency_us": 1888.0609523809524, 00:35:46.094 "max_latency_us": 8301.226666666667 00:35:46.094 } 00:35:46.094 ], 00:35:46.094 "core_count": 1 00:35:46.094 } 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:46.094 | .driver_specific 00:35:46.094 | .nvme_error 00:35:46.094 | .status_code 00:35:46.094 | .command_transient_transport_error' 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 390 > 0 )) 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197598 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197598 ']' 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197598 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:46.094 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.094 16:41:34 
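The jq filter traced above pulls the transient-transport-error counter out of bdev_get_iostat, and the throughput figure is simple arithmetic: 6016.50 IOPS at the 128 KiB I/O size is 6016.50 * 131072 / 1048576 ≈ 752.06 MiB/s, matching the table. A minimal sketch of the same readback, assuming a bperf instance is still listening on /var/tmp/bperf.sock as in this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Read the per-bdev NVMe error counters and keep only the transient
# transport errors that the injected data-digest failures produce.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# digest.sh passes this leg when at least one such error was counted
# (390 in this run).
(( errcount > 0 )) && echo "counted $errcount transient transport errors"
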
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197598 00:35:46.353 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:46.353 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:46.353 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197598' 00:35:46.353 killing process with pid 1197598 00:35:46.353 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197598 00:35:46.353 Received shutdown signal, test time was about 2.000000 seconds 00:35:46.353 00:35:46.353 Latency(us) 00:35:46.354 [2024-12-16T15:41:34.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.354 [2024-12-16T15:41:34.963Z] =================================================================================================================== 00:35:46.354 [2024-12-16T15:41:34.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197598 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1195876 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1195876 ']' 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1195876 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195876 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195876' 00:35:46.354 killing process with pid 1195876 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1195876 00:35:46.354 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1195876 00:35:46.620 00:35:46.620 real 0m13.872s 00:35:46.620 user 0m26.671s 00:35:46.620 sys 0m4.456s 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:46.620 ************************************ 00:35:46.620 END TEST nvmf_digest_error 00:35:46.620 ************************************ 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.620 16:41:35 
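The sequence above (kill -0, uname, ps --no-headers -o comm=, the sudo guard, then kill and wait) is autotest_common.sh's killprocess helper. A simplified reconstruction of what the trace is stepping through, not the verbatim source:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard
    if ! kill -0 "$pid" 2>/dev/null; then     # pid already gone
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1   # never signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}
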
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.620 rmmod nvme_tcp 00:35:46.620 rmmod nvme_fabrics 00:35:46.620 rmmod nvme_keyring 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:46.620 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1195876 ']' 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1195876 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1195876 ']' 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1195876 00:35:46.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1195876) - No such process 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1195876 is not found' 00:35:46.621 Process with pid 1195876 is not found 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.621 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.157 16:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:49.157 00:35:49.157 real 0m36.153s 00:35:49.157 user 0m55.279s 00:35:49.157 sys 0m13.485s 00:35:49.157 16:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.157 16:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.157 ************************************ 00:35:49.157 END TEST nvmf_digest 00:35:49.157 ************************************ 00:35:49.157 16:41:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:49.157 16:41:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:49.157 16:41:37 
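nvmftestfini's TCP cleanup, traced above, unloads the host-side NVMe modules, scrubs only the firewall rules the harness tagged, and tears down the target namespace. Roughly, with the namespace removal assumed to be an ip netns delete (the trace only shows the _remove_spdk_ns wrapper):

sync
modprobe -v -r nvme-tcp        # drags out nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics 2>/dev/null || true

# Round-trip the ruleset: every rule the tests added carries an SPDK_NVMF
# comment, so filtering those lines out restores the original firewall.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Assumed equivalent of _remove_spdk_ns, then flush the leftover
# initiator-side address as in the trace.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1
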
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.158 ************************************ 00:35:49.158 START TEST nvmf_bdevperf 00:35:49.158 ************************************ 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:49.158 * Looking for test storage... 00:35:49.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:49.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.158 --rc genhtml_branch_coverage=1 00:35:49.158 --rc genhtml_function_coverage=1 00:35:49.158 --rc genhtml_legend=1 00:35:49.158 --rc geninfo_all_blocks=1 00:35:49.158 --rc geninfo_unexecuted_blocks=1 00:35:49.158 00:35:49.158 ' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:49.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.158 --rc genhtml_branch_coverage=1 00:35:49.158 --rc genhtml_function_coverage=1 00:35:49.158 --rc genhtml_legend=1 00:35:49.158 --rc geninfo_all_blocks=1 00:35:49.158 --rc geninfo_unexecuted_blocks=1 00:35:49.158 00:35:49.158 ' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:49.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.158 --rc genhtml_branch_coverage=1 00:35:49.158 --rc genhtml_function_coverage=1 00:35:49.158 --rc genhtml_legend=1 00:35:49.158 --rc geninfo_all_blocks=1 00:35:49.158 --rc geninfo_unexecuted_blocks=1 00:35:49.158 00:35:49.158 ' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:49.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.158 --rc genhtml_branch_coverage=1 00:35:49.158 --rc genhtml_function_coverage=1 00:35:49.158 --rc genhtml_legend=1 00:35:49.158 --rc geninfo_all_blocks=1 00:35:49.158 --rc geninfo_unexecuted_blocks=1 00:35:49.158 00:35:49.158 ' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.158 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:49.159 16:41:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:55.731 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:55.731 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
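The device walk above is a table match (0x8086:0x159b is an Intel E810 port, hence the ice-driver checks), and mapping a matched PCI function to its kernel net device is a plain sysfs lookup. For the first port found in this run:

pci=0000:af:00.0                        # first E810 function matched above
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue          # no netdev bound to this function
    echo "Found net devices under $pci: ${path##*/}"
done
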
00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:55.731 Found net devices under 0000:af:00.0: cvl_0_0 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:55.731 Found net devices under 0000:af:00.1: cvl_0_1 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:55.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:55.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:35:55.731 00:35:55.731 --- 10.0.0.2 ping statistics --- 00:35:55.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.731 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:35:55.731 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:55.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:55.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:35:55.732 00:35:55.732 --- 10.0.0.1 ping statistics --- 00:35:55.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.732 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1201556 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1201556 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1201556 ']' 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 [2024-12-16 16:41:43.415640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:55.732 [2024-12-16 16:41:43.415682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:55.732 [2024-12-16 16:41:43.490621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:55.732 [2024-12-16 16:41:43.513310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:55.732 [2024-12-16 16:41:43.513348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:55.732 [2024-12-16 16:41:43.513355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:55.732 [2024-12-16 16:41:43.513361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:55.732 [2024-12-16 16:41:43.513367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:55.732 [2024-12-16 16:41:43.514697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:55.732 [2024-12-16 16:41:43.514805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.732 [2024-12-16 16:41:43.514806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 [2024-12-16 16:41:43.653631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 Malloc0 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 [2024-12-16 16:41:43.729079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.732 { 00:35:55.732 "params": { 00:35:55.732 "name": "Nvme$subsystem", 00:35:55.732 "trtype": "$TEST_TRANSPORT", 00:35:55.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.732 "adrfam": "ipv4", 00:35:55.732 "trsvcid": "$NVMF_PORT", 00:35:55.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.732 "hdgst": ${hdgst:-false}, 00:35:55.732 "ddgst": ${ddgst:-false} 00:35:55.732 }, 00:35:55.732 "method": "bdev_nvme_attach_controller" 00:35:55.732 } 00:35:55.732 EOF 00:35:55.732 )") 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:55.732 16:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:55.732 "params": { 00:35:55.732 "name": "Nvme1", 00:35:55.732 "trtype": "tcp", 00:35:55.732 "traddr": "10.0.0.2", 00:35:55.732 "adrfam": "ipv4", 00:35:55.732 "trsvcid": "4420", 00:35:55.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:55.732 "hdgst": false, 00:35:55.732 "ddgst": false 00:35:55.732 }, 00:35:55.732 "method": "bdev_nvme_attach_controller" 00:35:55.732 }' 00:35:55.732 [2024-12-16 16:41:43.779406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:55.732 [2024-12-16 16:41:43.779449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201580 ] 00:35:55.732 [2024-12-16 16:41:43.853985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.732 [2024-12-16 16:41:43.877296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.732 Running I/O for 1 seconds... 00:35:56.674 11230.00 IOPS, 43.87 MiB/s 00:35:56.674 Latency(us) 00:35:56.674 [2024-12-16T15:41:45.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.674 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:56.674 Verification LBA range: start 0x0 length 0x4000 00:35:56.674 Nvme1n1 : 1.04 10872.31 42.47 0.00 0.00 11277.88 1693.01 40944.40 00:35:56.674 [2024-12-16T15:41:45.283Z] =================================================================================================================== 00:35:56.674 [2024-12-16T15:41:45.283Z] Total : 10872.31 42.47 0.00 0.00 11277.88 1693.01 40944.40 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1201818 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:56.932 { 00:35:56.932 "params": { 00:35:56.932 "name": "Nvme$subsystem", 00:35:56.932 "trtype": "$TEST_TRANSPORT", 00:35:56.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.932 "adrfam": "ipv4", 00:35:56.932 "trsvcid": "$NVMF_PORT", 00:35:56.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.932 "hdgst": ${hdgst:-false}, 00:35:56.932 "ddgst": ${ddgst:-false} 00:35:56.932 }, 00:35:56.932 "method": "bdev_nvme_attach_controller" 00:35:56.932 } 00:35:56.932 EOF 00:35:56.932 )") 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:56.932 16:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:56.932 "params": { 00:35:56.932 "name": "Nvme1", 00:35:56.932 "trtype": "tcp", 00:35:56.932 "traddr": "10.0.0.2", 00:35:56.932 "adrfam": "ipv4", 00:35:56.932 "trsvcid": "4420", 00:35:56.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:56.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:56.932 "hdgst": false, 00:35:56.932 "ddgst": false 00:35:56.932 }, 00:35:56.932 "method": "bdev_nvme_attach_controller" 00:35:56.932 }' 00:35:56.932 [2024-12-16 16:41:45.435917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:56.932 [2024-12-16 16:41:45.435963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201818 ] 00:35:56.932 [2024-12-16 16:41:45.509935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.932 [2024-12-16 16:41:45.531859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.188 Running I/O for 15 seconds... 00:35:59.493 11307.00 IOPS, 44.17 MiB/s [2024-12-16T15:41:48.671Z] 11274.50 IOPS, 44.04 MiB/s [2024-12-16T15:41:48.671Z] 16:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1201556 00:36:00.062 16:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:00.062 [2024-12-16 16:41:48.412472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.062 [2024-12-16 16:41:48.412508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.062 [2024-12-16 16:41:48.412523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.062 [2024-12-16 16:41:48.412531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.062 [2024-12-16 16:41:48.412539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.062 [2024-12-16 16:41:48.412546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.062 [2024-12-16 16:41:48.412556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.062 [2024-12-16 16:41:48.412567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.062 [2024-12-16 16:41:48.412577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.062 [2024-12-16 16:41:48.412586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.063 [2024-12-16 16:41:48.412595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.063 [2024-12-16 
16:41:48.412605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.063
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" notice pair repeats here for every remaining outstanding I/O on qid:1 (READ and WRITE commands, lba 109448-110456) after nvmf_tgt (pid 1201556) was killed ...]
[2024-12-16 16:41:48.414522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.066 [2024-12-16
16:41:48.414530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.066 [2024-12-16 16:41:48.414537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f1920 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.414546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:00.066 [2024-12-16 16:41:48.414551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:00.066 [2024-12-16 16:41:48.414556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110392 len:8 PRP1 0x0 PRP2 0x0 00:36:00.066 [2024-12-16 16:41:48.414562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.066 [2024-12-16 16:41:48.414639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:00.066 [2024-12-16 16:41:48.414649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.066 [2024-12-16 16:41:48.414656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:00.066 [2024-12-16 16:41:48.414663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.066 [2024-12-16 16:41:48.414670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:00.066 [2024-12-16 16:41:48.414676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.066 [2024-12-16 16:41:48.414683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:00.066 [2024-12-16 16:41:48.414691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.066 [2024-12-16 16:41:48.414698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.417500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.417527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.418051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.418066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.418074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.418253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.418428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 
16:41:48.418436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.418445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.418453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.066 [2024-12-16 16:41:48.430758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.431144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.431193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.431217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.431800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.432186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.432195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.432202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.432209] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.066 [2024-12-16 16:41:48.443566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.443869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.443886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.443893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.444061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.444236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.444246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.444253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.444260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
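The blocks above are complete retry cycles, and the same cycle repeats for the remainder of this section: the queued I/Os are completed manually with ABORTED - SQ DELETION (00/08) as the submission queue is torn down, nvme_ctrlr_disconnect starts a reset, posix_sock_create's connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED on Linux, i.e. nothing is listening on the target port any more), the follow-up flush reports (9) Bad file descriptor (EBADF) because no socket was ever established, and bdev_nvme records the reset attempt as failed before scheduling the next one. A minimal standalone sketch of the connect() failure, for illustration only — this is not SPDK code; the address and port are simply the ones shown in the log:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same endpoint as in the log: 10.0.0.2, NVMe/TCP port 4420. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With the host reachable but no listener on the port, this
             * prints errno 111 (ECONNREFUSED) on Linux, the same value
             * logged by posix.c:1054 above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }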
00:36:00.066 [2024-12-16 16:41:48.456391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.456818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.456835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.456842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.457010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.457185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.457193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.457199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.457205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.066 [2024-12-16 16:41:48.469403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.469736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.469752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.469759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.469927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.470102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.470110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.470116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.470122] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.066 [2024-12-16 16:41:48.482262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.482598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.482614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.482624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.482792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.482959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.482966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.482972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.482978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.066 [2024-12-16 16:41:48.495101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.495400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.495416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.495423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.495590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.495757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.495765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.495771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.495777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.066 [2024-12-16 16:41:48.507977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.066 [2024-12-16 16:41:48.508341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.066 [2024-12-16 16:41:48.508385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.066 [2024-12-16 16:41:48.508409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.066 [2024-12-16 16:41:48.508992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.066 [2024-12-16 16:41:48.509263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.066 [2024-12-16 16:41:48.509272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.066 [2024-12-16 16:41:48.509278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.066 [2024-12-16 16:41:48.509284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.067 [2024-12-16 16:41:48.520974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.521322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.521339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.521346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.521513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.521684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.521692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.521698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.521704] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.067 [2024-12-16 16:41:48.533889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.534275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.534321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.534344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.534928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.535211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.535220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.535226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.535232] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.067 [2024-12-16 16:41:48.546822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.547194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.547211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.547218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.547386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.547554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.547563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.547569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.547575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.067 [2024-12-16 16:41:48.559750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.560121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.560138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.560145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.560312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.560479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.560486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.560496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.560502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.067 [2024-12-16 16:41:48.572658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.573007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.573023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.573030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.573203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.573372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.573380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.573386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.573391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.067 [2024-12-16 16:41:48.585553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.585924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.585941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.585948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.586122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.586291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.586299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.586305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.586311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.067 [2024-12-16 16:41:48.598521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.598851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.598884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.598909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.599451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.599619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.599628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.599634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.599640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.067 [2024-12-16 16:41:48.611365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.611641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.611657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.611664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.611831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.067 [2024-12-16 16:41:48.612000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.067 [2024-12-16 16:41:48.612007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.067 [2024-12-16 16:41:48.612014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.067 [2024-12-16 16:41:48.612020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.067 [2024-12-16 16:41:48.624344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.067 [2024-12-16 16:41:48.624627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.067 [2024-12-16 16:41:48.624642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.067 [2024-12-16 16:41:48.624649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.067 [2024-12-16 16:41:48.624807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.068 [2024-12-16 16:41:48.624965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.068 [2024-12-16 16:41:48.624973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.068 [2024-12-16 16:41:48.624979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.068 [2024-12-16 16:41:48.624985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.068 [2024-12-16 16:41:48.637220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.068 [2024-12-16 16:41:48.637482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.068 [2024-12-16 16:41:48.637498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.068 [2024-12-16 16:41:48.637505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.068 [2024-12-16 16:41:48.637673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.068 [2024-12-16 16:41:48.637840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.068 [2024-12-16 16:41:48.637848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.068 [2024-12-16 16:41:48.637854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.068 [2024-12-16 16:41:48.637859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.068 [2024-12-16 16:41:48.650039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.068 [2024-12-16 16:41:48.650383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.068 [2024-12-16 16:41:48.650400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.068 [2024-12-16 16:41:48.650410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.068 [2024-12-16 16:41:48.650578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.068 [2024-12-16 16:41:48.650745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.068 [2024-12-16 16:41:48.650753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.068 [2024-12-16 16:41:48.650759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.068 [2024-12-16 16:41:48.650765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.068 [2024-12-16 16:41:48.663098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.068 [2024-12-16 16:41:48.663515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.068 [2024-12-16 16:41:48.663532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.068 [2024-12-16 16:41:48.663540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.068 [2024-12-16 16:41:48.663713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.068 [2024-12-16 16:41:48.663886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.068 [2024-12-16 16:41:48.663895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.068 [2024-12-16 16:41:48.663902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.068 [2024-12-16 16:41:48.663907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.327 [2024-12-16 16:41:48.676154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.327 [2024-12-16 16:41:48.676565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.327 [2024-12-16 16:41:48.676581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.327 [2024-12-16 16:41:48.676588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.327 [2024-12-16 16:41:48.676761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.327 [2024-12-16 16:41:48.676933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.327 [2024-12-16 16:41:48.676941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.327 [2024-12-16 16:41:48.676947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.327 [2024-12-16 16:41:48.676953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.328 [2024-12-16 16:41:48.689228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.689675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.689722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.689745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.690330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.690508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.690516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.690522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.690529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.328 [2024-12-16 16:41:48.702144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.702574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.702590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.702597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.702765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.702933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.702940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.702946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.702952] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.328 [2024-12-16 16:41:48.714926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.715341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.715358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.715365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.715533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.715700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.715708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.715714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.715719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.328 [2024-12-16 16:41:48.727768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.728198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.728244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.728266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.728859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.729018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.729026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.729037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.729043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.328 [2024-12-16 16:41:48.740543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.740979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.741024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.741047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.741567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.741736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.741744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.741750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.741755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.328 10021.33 IOPS, 39.15 MiB/s [2024-12-16T15:41:48.937Z] [2024-12-16 16:41:48.753301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.753724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.753740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.753747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.753906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.754064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.754072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.754078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.754083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
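The sample interleaved above, "10021.33 IOPS, 39.15 MiB/s", is the test's periodic throughput report, and the two figures agree with the I/O size visible in the command prints: len:8 with an LBA stride of 8 means 8 sectors per I/O, i.e. 4096 B assuming 512-byte sectors. Worked out: 10021.33 IOPS x 4096 B ≈ 41,047,368 B/s, and 41,047,368 / 1,048,576 ≈ 39.15 MiB/s.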
00:36:00.328 [2024-12-16 16:41:48.766113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.766461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.766476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.766483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.766642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.766800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.766807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.766813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.766819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.328 [2024-12-16 16:41:48.778900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.779312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.779328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.779336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.779503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.779671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.779678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.779684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.779690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.328 [2024-12-16 16:41:48.791745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.792107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.792152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.792174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.792655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.792815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.792822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.792828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.792833] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.328 [2024-12-16 16:41:48.804574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.804982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.804998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.805004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.805188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.805356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.805363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.805369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.805375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.328 [2024-12-16 16:41:48.817448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.817860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.817878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.817885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.818044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.818228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.818237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.818243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.818249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.328 [2024-12-16 16:41:48.830218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.830648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.830693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.830717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.328 [2024-12-16 16:41:48.831226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.328 [2024-12-16 16:41:48.831395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.328 [2024-12-16 16:41:48.831402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.328 [2024-12-16 16:41:48.831408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.328 [2024-12-16 16:41:48.831414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.328 [2024-12-16 16:41:48.843012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.328 [2024-12-16 16:41:48.843451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.328 [2024-12-16 16:41:48.843468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.328 [2024-12-16 16:41:48.843475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.329 [2024-12-16 16:41:48.843643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.329 [2024-12-16 16:41:48.843811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.329 [2024-12-16 16:41:48.843818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.329 [2024-12-16 16:41:48.843824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.329 [2024-12-16 16:41:48.843830] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.329 [2024-12-16 16:41:48.855816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.329 [2024-12-16 16:41:48.856203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.329 [2024-12-16 16:41:48.856220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.329 [2024-12-16 16:41:48.856226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.329 [2024-12-16 16:41:48.856388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.329 [2024-12-16 16:41:48.856547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.329 [2024-12-16 16:41:48.856555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.329 [2024-12-16 16:41:48.856560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.329 [2024-12-16 16:41:48.856566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.329 [2024-12-16 16:41:48.868654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.329 [2024-12-16 16:41:48.869083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.329 [2024-12-16 16:41:48.869138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.329 [2024-12-16 16:41:48.869161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.329 [2024-12-16 16:41:48.869743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.329 [2024-12-16 16:41:48.870197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.329 [2024-12-16 16:41:48.870205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.329 [2024-12-16 16:41:48.870211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.329 [2024-12-16 16:41:48.870217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.329 [2024-12-16 16:41:48.881485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.329 [2024-12-16 16:41:48.881900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.329 [2024-12-16 16:41:48.881915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.329 [2024-12-16 16:41:48.881922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.329 [2024-12-16 16:41:48.882081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.329 [2024-12-16 16:41:48.882269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.329 [2024-12-16 16:41:48.882277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.329 [2024-12-16 16:41:48.882284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.329 [2024-12-16 16:41:48.882289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.329 [2024-12-16 16:41:48.894326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.329 [2024-12-16 16:41:48.894723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.329 [2024-12-16 16:41:48.894738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.329 [2024-12-16 16:41:48.894745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.329 [2024-12-16 16:41:48.894904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.329 [2024-12-16 16:41:48.895063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.329 [2024-12-16 16:41:48.895070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.329 [2024-12-16 16:41:48.895079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.329 [2024-12-16 16:41:48.895085] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.329 [2024-12-16 16:41:48.907177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.329 [2024-12-16 16:41:48.907504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.329 [2024-12-16 16:41:48.907520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.329 [2024-12-16 16:41:48.907527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.329 [2024-12-16 16:41:48.907685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.329 [2024-12-16 16:41:48.907844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.329 [2024-12-16 16:41:48.907851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.329 [2024-12-16 16:41:48.907856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.329 [2024-12-16 16:41:48.907862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.329 [2024-12-16 16:41:48.919913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.329 [2024-12-16 16:41:48.920370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.329 [2024-12-16 16:41:48.920386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.329 [2024-12-16 16:41:48.920394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.329 [2024-12-16 16:41:48.920567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.329 [2024-12-16 16:41:48.920739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.329 [2024-12-16 16:41:48.920747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.329 [2024-12-16 16:41:48.920754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.329 [2024-12-16 16:41:48.920760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.329 [2024-12-16 16:41:48.932970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.329 [2024-12-16 16:41:48.933399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.329 [2024-12-16 16:41:48.933415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.329 [2024-12-16 16:41:48.933422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.329 [2024-12-16 16:41:48.933595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.329 [2024-12-16 16:41:48.933768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.329 [2024-12-16 16:41:48.933776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.329 [2024-12-16 16:41:48.933782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.329 [2024-12-16 16:41:48.933788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.589 [2024-12-16 16:41:48.945866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.589 [2024-12-16 16:41:48.946326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.589 [2024-12-16 16:41:48.946372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.589 [2024-12-16 16:41:48.946396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.589 [2024-12-16 16:41:48.946981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.589 [2024-12-16 16:41:48.947416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.589 [2024-12-16 16:41:48.947424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.589 [2024-12-16 16:41:48.947431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.589 [2024-12-16 16:41:48.947437] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.589 [2024-12-16 16:41:48.958687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.589 [2024-12-16 16:41:48.959102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.589 [2024-12-16 16:41:48.959118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.589 [2024-12-16 16:41:48.959125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.589 [2024-12-16 16:41:48.959285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.589 [2024-12-16 16:41:48.959443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.589 [2024-12-16 16:41:48.959451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.589 [2024-12-16 16:41:48.959456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.589 [2024-12-16 16:41:48.959462] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.589 [2024-12-16 16:41:48.971505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.589 [2024-12-16 16:41:48.971938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.589 [2024-12-16 16:41:48.971983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.589 [2024-12-16 16:41:48.972006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.589 [2024-12-16 16:41:48.972420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.589 [2024-12-16 16:41:48.972589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.589 [2024-12-16 16:41:48.972596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.589 [2024-12-16 16:41:48.972603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.589 [2024-12-16 16:41:48.972608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.589 [2024-12-16 16:41:48.984283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.589 [2024-12-16 16:41:48.984692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.589 [2024-12-16 16:41:48.984711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.589 [2024-12-16 16:41:48.984717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.589 [2024-12-16 16:41:48.984876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.589 [2024-12-16 16:41:48.985035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.589 [2024-12-16 16:41:48.985042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.589 [2024-12-16 16:41:48.985048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.589 [2024-12-16 16:41:48.985053] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.589 [2024-12-16 16:41:48.997144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.589 [2024-12-16 16:41:48.997556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.589 [2024-12-16 16:41:48.997572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:48.997578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:48.997737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:48.997895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:48.997902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:48.997908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:48.997914] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.009961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.010392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.010408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.010415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.010583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.010751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.010759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.010765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.010770] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.022860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.023200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.023216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.023222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.023384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.023543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.023550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.023556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.023561] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.035704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.036117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.036133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.036140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.036298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.036457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.036465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.036471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.036476] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.048551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.048882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.048897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.048904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.049072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.049245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.049254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.049260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.049266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.061475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.061872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.061888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.061895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.062062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.062235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.062243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.062254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.062260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.074277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.074696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.074712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.074719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.074887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.075055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.075062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.075068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.075074] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.087003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.087433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.087449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.087457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.087625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.087792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.087800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.087805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.087811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.099762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.100143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.100188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.100211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.100677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.100847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.100855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.100862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.100869] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.112553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.112899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.112915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.112922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.590 [2024-12-16 16:41:49.113089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.590 [2024-12-16 16:41:49.113266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.590 [2024-12-16 16:41:49.113275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.590 [2024-12-16 16:41:49.113281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.590 [2024-12-16 16:41:49.113287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.590 [2024-12-16 16:41:49.125563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.590 [2024-12-16 16:41:49.125997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.590 [2024-12-16 16:41:49.126013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.590 [2024-12-16 16:41:49.126020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.591 [2024-12-16 16:41:49.126197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.591 [2024-12-16 16:41:49.126365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.591 [2024-12-16 16:41:49.126374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.591 [2024-12-16 16:41:49.126380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.591 [2024-12-16 16:41:49.126387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.591 [2024-12-16 16:41:49.138385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.591 [2024-12-16 16:41:49.138756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.591 [2024-12-16 16:41:49.138771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.591 [2024-12-16 16:41:49.138779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.591 [2024-12-16 16:41:49.138946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.591 [2024-12-16 16:41:49.139119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.591 [2024-12-16 16:41:49.139128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.591 [2024-12-16 16:41:49.139134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.591 [2024-12-16 16:41:49.139139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.591 [2024-12-16 16:41:49.151189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.591 [2024-12-16 16:41:49.151613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.591 [2024-12-16 16:41:49.151633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.591 [2024-12-16 16:41:49.151640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.591 [2024-12-16 16:41:49.151807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.591 [2024-12-16 16:41:49.151975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.591 [2024-12-16 16:41:49.151982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.591 [2024-12-16 16:41:49.151988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.591 [2024-12-16 16:41:49.151994] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.591 [2024-12-16 16:41:49.164088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.591 [2024-12-16 16:41:49.164386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.591 [2024-12-16 16:41:49.164402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.591 [2024-12-16 16:41:49.164409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.591 [2024-12-16 16:41:49.164577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.591 [2024-12-16 16:41:49.164745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.591 [2024-12-16 16:41:49.164753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.591 [2024-12-16 16:41:49.164759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.591 [2024-12-16 16:41:49.164765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.591 [2024-12-16 16:41:49.176919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.591 [2024-12-16 16:41:49.177283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.591 [2024-12-16 16:41:49.177300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.591 [2024-12-16 16:41:49.177307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.591 [2024-12-16 16:41:49.177480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.591 [2024-12-16 16:41:49.177652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.591 [2024-12-16 16:41:49.177660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.591 [2024-12-16 16:41:49.177666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.591 [2024-12-16 16:41:49.177672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.591 [2024-12-16 16:41:49.190003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.591 [2024-12-16 16:41:49.190373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.591 [2024-12-16 16:41:49.190390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.591 [2024-12-16 16:41:49.190397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.591 [2024-12-16 16:41:49.190574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.591 [2024-12-16 16:41:49.190749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.591 [2024-12-16 16:41:49.190757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.591 [2024-12-16 16:41:49.190763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.591 [2024-12-16 16:41:49.190769] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.203122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.203469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.203485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.203492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.203660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.203827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.203834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.203840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.203846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.216107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.216510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.216526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.216533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.216700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.216868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.216876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.216882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.216887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.228860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.229259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.229275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.229282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.229440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.229599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.229606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.229616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.229621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.241663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.241989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.242005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.242011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.242203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.242385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.242392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.242399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.242404] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.254526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.254936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.254952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.254959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.255134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.255302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.255310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.255315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.255322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.267384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.267769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.267784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.267790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.267949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.268113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.268137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.268144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.268150] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.280169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.280559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.280574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.280581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.280739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.280898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.280905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.280911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.280917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.851 [2024-12-16 16:41:49.292987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.851 [2024-12-16 16:41:49.293397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.851 [2024-12-16 16:41:49.293413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.851 [2024-12-16 16:41:49.293420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.851 [2024-12-16 16:41:49.293588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.851 [2024-12-16 16:41:49.293756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.851 [2024-12-16 16:41:49.293764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.851 [2024-12-16 16:41:49.293770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.851 [2024-12-16 16:41:49.293776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.305752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.306142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.306157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.306164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.306323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.306481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.306488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.306494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.306500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.318487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.318891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.318942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.318966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.319566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.320105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.320113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.320119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.320125] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.331279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.331688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.331703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.331710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.331868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.332026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.332034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.332040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.332045] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.344046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.344472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.344489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.344496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.344664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.344832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.344839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.344845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.344851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.356890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.357319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.357336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.357343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.357519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.357688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.357696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.357702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.357708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.369622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.370040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.370083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.370121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.370706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.371221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.371229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.371236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.371241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.382402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.382837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.382884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.382908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.383334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.383503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.383511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.383518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.383524] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.395199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:00.852 [2024-12-16 16:41:49.395584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.852 [2024-12-16 16:41:49.395601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:00.852 [2024-12-16 16:41:49.395608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:00.852 [2024-12-16 16:41:49.395767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:00.852 [2024-12-16 16:41:49.395925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:00.852 [2024-12-16 16:41:49.395933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:00.852 [2024-12-16 16:41:49.395942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:00.852 [2024-12-16 16:41:49.395948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:00.852 [2024-12-16 16:41:49.407936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.852 [2024-12-16 16:41:49.408349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.852 [2024-12-16 16:41:49.408365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.852 [2024-12-16 16:41:49.408372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.852 [2024-12-16 16:41:49.408539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.852 [2024-12-16 16:41:49.408707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.852 [2024-12-16 16:41:49.408714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.852 [2024-12-16 16:41:49.408721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.852 [2024-12-16 16:41:49.408726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.852 [2024-12-16 16:41:49.420726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.852 [2024-12-16 16:41:49.421142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.852 [2024-12-16 16:41:49.421157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.852 [2024-12-16 16:41:49.421164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.852 [2024-12-16 16:41:49.421332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.852 [2024-12-16 16:41:49.421500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.852 [2024-12-16 16:41:49.421508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.852 [2024-12-16 16:41:49.421514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.852 [2024-12-16 16:41:49.421520] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:00.852 [2024-12-16 16:41:49.433702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.852 [2024-12-16 16:41:49.434122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.852 [2024-12-16 16:41:49.434139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.852 [2024-12-16 16:41:49.434146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.852 [2024-12-16 16:41:49.434319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.852 [2024-12-16 16:41:49.434491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.852 [2024-12-16 16:41:49.434500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.852 [2024-12-16 16:41:49.434506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.852 [2024-12-16 16:41:49.434512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:00.852 [2024-12-16 16:41:49.446686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:00.852 [2024-12-16 16:41:49.447084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.852 [2024-12-16 16:41:49.447107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:00.852 [2024-12-16 16:41:49.447114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:00.852 [2024-12-16 16:41:49.447287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:00.852 [2024-12-16 16:41:49.447459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:00.852 [2024-12-16 16:41:49.447467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:00.852 [2024-12-16 16:41:49.447474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:00.852 [2024-12-16 16:41:49.447480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.111 [2024-12-16 16:41:49.459754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.111 [2024-12-16 16:41:49.460081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.111 [2024-12-16 16:41:49.460102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.111 [2024-12-16 16:41:49.460109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.111 [2024-12-16 16:41:49.460277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.111 [2024-12-16 16:41:49.460444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.111 [2024-12-16 16:41:49.460451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.111 [2024-12-16 16:41:49.460457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.111 [2024-12-16 16:41:49.460463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.111 [2024-12-16 16:41:49.472498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.111 [2024-12-16 16:41:49.472888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.111 [2024-12-16 16:41:49.472903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.111 [2024-12-16 16:41:49.472910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.111 [2024-12-16 16:41:49.473068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.111 [2024-12-16 16:41:49.473256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.111 [2024-12-16 16:41:49.473265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.111 [2024-12-16 16:41:49.473271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.111 [2024-12-16 16:41:49.473277] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.111 [2024-12-16 16:41:49.485288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.111 [2024-12-16 16:41:49.485669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.111 [2024-12-16 16:41:49.485687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.111 [2024-12-16 16:41:49.485694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.111 [2024-12-16 16:41:49.485862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.111 [2024-12-16 16:41:49.486029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.111 [2024-12-16 16:41:49.486036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.111 [2024-12-16 16:41:49.486043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.111 [2024-12-16 16:41:49.486048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.111 [2024-12-16 16:41:49.498034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.111 [2024-12-16 16:41:49.498367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.111 [2024-12-16 16:41:49.498383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.111 [2024-12-16 16:41:49.498390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.111 [2024-12-16 16:41:49.498557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.111 [2024-12-16 16:41:49.498724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.111 [2024-12-16 16:41:49.498732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.111 [2024-12-16 16:41:49.498738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.111 [2024-12-16 16:41:49.498744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.510871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.511259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.511274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.511281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.511439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.511598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.511605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.511611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.511617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.523701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.524113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.524130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.524137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.524304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.524475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.524483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.524489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.524495] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.536442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.536830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.536846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.536852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.537012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.537196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.537205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.537211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.537217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.549204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.549629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.549645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.549652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.549820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.549987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.549995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.550001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.550007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.562040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.562452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.562468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.562475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.562643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.562819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.562826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.562835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.562841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.574825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.575173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.575216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.575239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.575754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.575913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.575921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.575927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.575932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.587575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.587970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.587986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.587993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.588176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.588344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.588352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.588358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.588363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.600329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.600722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.600737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.600744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.600903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.601061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.601068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.601074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.601079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.613174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.613585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.613600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.613607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.613775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.613942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.613949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.613955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.613961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.626010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.626423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.626467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.626489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.626961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.627134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.627143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.627150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.627156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.638829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.639262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.639280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.639287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.639456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.639623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.639631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.639637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.639643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.651683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.652110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.652127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.652138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.652307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.652475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.652482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.652488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.652494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.664587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.665010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.665027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.665034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.665209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.665377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.665387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.665393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.665399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.677568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.678003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.678046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.678069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.678664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.679238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.679247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.679253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.679259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.690483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.690794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.690811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.690818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.690990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.691173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.691182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.691189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.691195] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.112 [2024-12-16 16:41:49.703545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.703944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.703960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.703967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.704140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.704308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.704316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.704322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.704328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.112 [2024-12-16 16:41:49.716721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.112 [2024-12-16 16:41:49.717178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.112 [2024-12-16 16:41:49.717199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.112 [2024-12-16 16:41:49.717207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.112 [2024-12-16 16:41:49.717410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.112 [2024-12-16 16:41:49.717603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.112 [2024-12-16 16:41:49.717613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.112 [2024-12-16 16:41:49.717620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.112 [2024-12-16 16:41:49.717626] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.437 [2024-12-16 16:41:49.729840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.730279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.730296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.730303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.730477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.730649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.730657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.437 [2024-12-16 16:41:49.730666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.437 [2024-12-16 16:41:49.730673] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.437 [2024-12-16 16:41:49.742921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.743232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.743249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.743258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.743441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.743625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.743634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.437 [2024-12-16 16:41:49.743641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.437 [2024-12-16 16:41:49.743647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.437 7516.00 IOPS, 29.36 MiB/s [2024-12-16T15:41:50.046Z] [2024-12-16 16:41:49.756169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.756593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.756609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.756617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.756800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.756983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.756992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.437 [2024-12-16 16:41:49.756999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.437 [2024-12-16 16:41:49.757005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.437 [2024-12-16 16:41:49.769207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.769602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.769619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.769626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.769799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.769971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.769979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.437 [2024-12-16 16:41:49.769985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.437 [2024-12-16 16:41:49.769991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.437 [2024-12-16 16:41:49.782206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.782508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.782524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.782532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.782704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.782877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.782885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.437 [2024-12-16 16:41:49.782891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.437 [2024-12-16 16:41:49.782897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.437 [2024-12-16 16:41:49.795112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.795448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.795463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.795471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.795638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.795807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.795815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.437 [2024-12-16 16:41:49.795821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.437 [2024-12-16 16:41:49.795826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.437 [2024-12-16 16:41:49.807924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.437 [2024-12-16 16:41:49.808290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.437 [2024-12-16 16:41:49.808306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.437 [2024-12-16 16:41:49.808313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.437 [2024-12-16 16:41:49.808481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.437 [2024-12-16 16:41:49.808648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.437 [2024-12-16 16:41:49.808656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.808662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.808668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.438 [2024-12-16 16:41:49.820807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.821196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.821247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.821271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.821856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.822452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.822474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.822481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.822487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.438 [2024-12-16 16:41:49.833663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.834076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.834092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.834106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.834273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.834441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.834449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.834455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.834461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.438 [2024-12-16 16:41:49.846494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.846908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.846925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.846932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.847104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.847272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.847280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.847286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.847292] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.438 [2024-12-16 16:41:49.859500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.859958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.859975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.859982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.860158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.860327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.860335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.860341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.860347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.438 [2024-12-16 16:41:49.872444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.872723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.872738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.872746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.872913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.873081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.873089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.873100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.873107] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.438 [2024-12-16 16:41:49.885272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.885674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.885717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.885740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.886332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.886790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.886799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.886805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.886812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.438 [2024-12-16 16:41:49.898161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.898550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.898594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.898616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.899214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.899643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.899651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.899660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.899666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.438 [2024-12-16 16:41:49.911066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.911486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.911503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.911510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.911677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.911845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.911853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.911859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.911865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.438 [2024-12-16 16:41:49.923961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.924250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.924267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.924274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.924441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.924609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.924617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.924623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.924628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.438 [2024-12-16 16:41:49.936765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.438 [2024-12-16 16:41:49.937201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.438 [2024-12-16 16:41:49.937246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.438 [2024-12-16 16:41:49.937270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.438 [2024-12-16 16:41:49.937692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.438 [2024-12-16 16:41:49.937853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.438 [2024-12-16 16:41:49.937860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.438 [2024-12-16 16:41:49.937866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.438 [2024-12-16 16:41:49.937872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.439 [2024-12-16 16:41:49.949623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.439 [2024-12-16 16:41:49.950037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.439 [2024-12-16 16:41:49.950053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.439 [2024-12-16 16:41:49.950061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.439 [2024-12-16 16:41:49.950239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.439 [2024-12-16 16:41:49.950412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.439 [2024-12-16 16:41:49.950420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.439 [2024-12-16 16:41:49.950426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.439 [2024-12-16 16:41:49.950432] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.439 [2024-12-16 16:41:49.962600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.439 [2024-12-16 16:41:49.963028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.439 [2024-12-16 16:41:49.963044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.439 [2024-12-16 16:41:49.963051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.439 [2024-12-16 16:41:49.963252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.439 [2024-12-16 16:41:49.963426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.439 [2024-12-16 16:41:49.963434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.439 [2024-12-16 16:41:49.963440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.439 [2024-12-16 16:41:49.963446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.439 [2024-12-16 16:41:49.975499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.439 [2024-12-16 16:41:49.975928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.439 [2024-12-16 16:41:49.975972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.439 [2024-12-16 16:41:49.975994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.439 [2024-12-16 16:41:49.976577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.439 [2024-12-16 16:41:49.976746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.439 [2024-12-16 16:41:49.976754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.439 [2024-12-16 16:41:49.976761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.439 [2024-12-16 16:41:49.976767] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.439 [2024-12-16 16:41:49.988571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.439 [2024-12-16 16:41:49.988977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.439 [2024-12-16 16:41:49.988999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.439 [2024-12-16 16:41:49.989007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.439 [2024-12-16 16:41:49.989186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.439 [2024-12-16 16:41:49.989359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.439 [2024-12-16 16:41:49.989367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.439 [2024-12-16 16:41:49.989374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.439 [2024-12-16 16:41:49.989380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.439 [2024-12-16 16:41:50.001599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.439 [2024-12-16 16:41:50.001964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.439 [2024-12-16 16:41:50.001981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.439 [2024-12-16 16:41:50.001989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.439 [2024-12-16 16:41:50.002166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.439 [2024-12-16 16:41:50.002339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.439 [2024-12-16 16:41:50.002347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.439 [2024-12-16 16:41:50.002353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.439 [2024-12-16 16:41:50.002359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.760 [2024-12-16 16:41:50.014732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.760 [2024-12-16 16:41:50.015148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.760 [2024-12-16 16:41:50.015167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.760 [2024-12-16 16:41:50.015175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.760 [2024-12-16 16:41:50.015350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.760 [2024-12-16 16:41:50.015524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.760 [2024-12-16 16:41:50.015533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.760 [2024-12-16 16:41:50.015539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.760 [2024-12-16 16:41:50.015546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.760 [2024-12-16 16:41:50.028354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.760 [2024-12-16 16:41:50.028774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.760 [2024-12-16 16:41:50.028797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.760 [2024-12-16 16:41:50.028809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.760 [2024-12-16 16:41:50.029033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.760 [2024-12-16 16:41:50.029258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.760 [2024-12-16 16:41:50.029270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.760 [2024-12-16 16:41:50.029277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.760 [2024-12-16 16:41:50.029285] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.760 [2024-12-16 16:41:50.041366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.760 [2024-12-16 16:41:50.041821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.760 [2024-12-16 16:41:50.041839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.760 [2024-12-16 16:41:50.041848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.760 [2024-12-16 16:41:50.042022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.760 [2024-12-16 16:41:50.042201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.760 [2024-12-16 16:41:50.042211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.760 [2024-12-16 16:41:50.042218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.760 [2024-12-16 16:41:50.042225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.761 [2024-12-16 16:41:50.054449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.054818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.054835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.054842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.055015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.055195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.055204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.055211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.055217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.761 [2024-12-16 16:41:50.067440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.067885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.067926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.067952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.068547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.068827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.068835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.068845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.068852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.761 [2024-12-16 16:41:50.080390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.080835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.080851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.080859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.081027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.081200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.081208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.081214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.081220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.761 [2024-12-16 16:41:50.093498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.093807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.093824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.093832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.094005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.094185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.094195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.094201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.094208] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.761 [2024-12-16 16:41:50.106560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.107004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.107020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.107027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.107207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.107380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.107389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.107395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.107401] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.761 [2024-12-16 16:41:50.119471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.119911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.119927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.119934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.120113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.120286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.120294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.120301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.120306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.761 [2024-12-16 16:41:50.132532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.132950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.132967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.132974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.133154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.133327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.133335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.133341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.133347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.761 [2024-12-16 16:41:50.145526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.145961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.145977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.145984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.146166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.146339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.146347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.146353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.146359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.761 [2024-12-16 16:41:50.158402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.158741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.158760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.158768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.158941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.159121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.159130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.159136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.159143] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.761 [2024-12-16 16:41:50.171355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.171749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.761 [2024-12-16 16:41:50.171766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.761 [2024-12-16 16:41:50.171773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.761 [2024-12-16 16:41:50.171946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.761 [2024-12-16 16:41:50.172124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.761 [2024-12-16 16:41:50.172134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.761 [2024-12-16 16:41:50.172140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.761 [2024-12-16 16:41:50.172146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.761 [2024-12-16 16:41:50.184399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.761 [2024-12-16 16:41:50.184733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.184749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.184756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.184929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.185110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.185119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.185126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.185132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.762 [2024-12-16 16:41:50.197516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.197932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.197975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.197998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.198449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.198623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.198631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.198637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.198643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.762 [2024-12-16 16:41:50.210537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.210977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.210994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.211001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.211181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.211354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.211362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.211369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.211375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.762 [2024-12-16 16:41:50.223621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.223988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.224031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.224054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.224567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.224741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.224749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.224755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.224761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.762 [2024-12-16 16:41:50.236595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.237005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.237021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.237029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.237209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.237382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.237390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.237400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.237406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.762 [2024-12-16 16:41:50.249609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.249956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.249972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.249979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.250159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.250332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.250340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.250346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.250352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.762 [2024-12-16 16:41:50.262519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.262963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.262979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.262987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.263166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.263339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.263348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.263354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.263360] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.762 [2024-12-16 16:41:50.275483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.275951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.275967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.275974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.276155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.276328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.276336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.276342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.276348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.762 [2024-12-16 16:41:50.288477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.288792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.288808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.288815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.288983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.289174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.289182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.289189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.289195] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.762 [2024-12-16 16:41:50.301395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.301832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.301847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.301855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.762 [2024-12-16 16:41:50.302027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.762 [2024-12-16 16:41:50.302207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.762 [2024-12-16 16:41:50.302216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.762 [2024-12-16 16:41:50.302222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.762 [2024-12-16 16:41:50.302229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.762 [2024-12-16 16:41:50.314349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.762 [2024-12-16 16:41:50.314723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.762 [2024-12-16 16:41:50.314739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.762 [2024-12-16 16:41:50.314746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.763 [2024-12-16 16:41:50.314919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.763 [2024-12-16 16:41:50.315091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.763 [2024-12-16 16:41:50.315106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.763 [2024-12-16 16:41:50.315112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.763 [2024-12-16 16:41:50.315118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.763 [2024-12-16 16:41:50.327245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.763 [2024-12-16 16:41:50.327674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.763 [2024-12-16 16:41:50.327692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.763 [2024-12-16 16:41:50.327699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.763 [2024-12-16 16:41:50.327866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.763 [2024-12-16 16:41:50.328034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.763 [2024-12-16 16:41:50.328042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.763 [2024-12-16 16:41:50.328048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.763 [2024-12-16 16:41:50.328053] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:01.763 [2024-12-16 16:41:50.340292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.763 [2024-12-16 16:41:50.340714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.763 [2024-12-16 16:41:50.340731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.763 [2024-12-16 16:41:50.340738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.763 [2024-12-16 16:41:50.340910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.763 [2024-12-16 16:41:50.341082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.763 [2024-12-16 16:41:50.341090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.763 [2024-12-16 16:41:50.341106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.763 [2024-12-16 16:41:50.341113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:01.763 [2024-12-16 16:41:50.353341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:01.763 [2024-12-16 16:41:50.353693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.763 [2024-12-16 16:41:50.353709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:01.763 [2024-12-16 16:41:50.353716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:01.763 [2024-12-16 16:41:50.353889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:01.763 [2024-12-16 16:41:50.354061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:01.763 [2024-12-16 16:41:50.354068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:01.763 [2024-12-16 16:41:50.354075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:01.763 [2024-12-16 16:41:50.354081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.023 [2024-12-16 16:41:50.366487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.023 [2024-12-16 16:41:50.366919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.023 [2024-12-16 16:41:50.366935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.023 [2024-12-16 16:41:50.366943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.023 [2024-12-16 16:41:50.367124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.023 [2024-12-16 16:41:50.367297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.023 [2024-12-16 16:41:50.367305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.023 [2024-12-16 16:41:50.367311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.023 [2024-12-16 16:41:50.367317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.023 [2024-12-16 16:41:50.379523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.023 [2024-12-16 16:41:50.379948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.023 [2024-12-16 16:41:50.379966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.023 [2024-12-16 16:41:50.379973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.023 [2024-12-16 16:41:50.380153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.023 [2024-12-16 16:41:50.380327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.023 [2024-12-16 16:41:50.380335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.023 [2024-12-16 16:41:50.380341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.023 [2024-12-16 16:41:50.380347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.023 [2024-12-16 16:41:50.392552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.023 [2024-12-16 16:41:50.392988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.023 [2024-12-16 16:41:50.393005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.023 [2024-12-16 16:41:50.393012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.023 [2024-12-16 16:41:50.393191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.023 [2024-12-16 16:41:50.393365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.023 [2024-12-16 16:41:50.393372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.023 [2024-12-16 16:41:50.393379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.024 [2024-12-16 16:41:50.393385] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.024 [2024-12-16 16:41:50.405587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.024 [2024-12-16 16:41:50.406018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.024 [2024-12-16 16:41:50.406034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.024 [2024-12-16 16:41:50.406041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.024 [2024-12-16 16:41:50.406222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.024 [2024-12-16 16:41:50.406395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.024 [2024-12-16 16:41:50.406404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.024 [2024-12-16 16:41:50.406413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.024 [2024-12-16 16:41:50.406419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.024 [2024-12-16 16:41:50.418529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.024 [2024-12-16 16:41:50.418971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.024 [2024-12-16 16:41:50.419004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.024 [2024-12-16 16:41:50.419029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.024 [2024-12-16 16:41:50.419614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.024 [2024-12-16 16:41:50.419788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.024 [2024-12-16 16:41:50.419795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.024 [2024-12-16 16:41:50.419802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.024 [2024-12-16 16:41:50.419808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.024 [2024-12-16 16:41:50.431523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.024 [2024-12-16 16:41:50.431955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.024 [2024-12-16 16:41:50.431970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.024 [2024-12-16 16:41:50.431977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.024 [2024-12-16 16:41:50.432157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.024 [2024-12-16 16:41:50.432330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.024 [2024-12-16 16:41:50.432338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.024 [2024-12-16 16:41:50.432345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.024 [2024-12-16 16:41:50.432351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.024 [2024-12-16 16:41:50.444507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.024 [2024-12-16 16:41:50.444917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.024 [2024-12-16 16:41:50.444961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.024 [2024-12-16 16:41:50.444984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.024 [2024-12-16 16:41:50.445584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.024 [2024-12-16 16:41:50.446017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.024 [2024-12-16 16:41:50.446025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.024 [2024-12-16 16:41:50.446031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.024 [2024-12-16 16:41:50.446037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.024 [2024-12-16 16:41:50.457507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.024 [2024-12-16 16:41:50.457954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.024 [2024-12-16 16:41:50.457998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.024 [2024-12-16 16:41:50.458021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.024 [2024-12-16 16:41:50.458568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.024 [2024-12-16 16:41:50.458962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.024 [2024-12-16 16:41:50.458979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.024 [2024-12-16 16:41:50.458993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.024 [2024-12-16 16:41:50.459005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[log trimmed: the identical disconnect -> connect() refused (errno 111) -> reinitialization-failed cycle recurs every ~13 ms from 16:41:50.472 through 16:41:50.721, always against tqpair=0x22c8490 at 10.0.0.2:4420, each pass ending in "Resetting controller failed."]
00:36:02.286 [2024-12-16 16:41:50.734898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:02.286 [2024-12-16 16:41:50.735406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.286 [2024-12-16 16:41:50.735428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:02.286 [2024-12-16 16:41:50.735438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:02.286 [2024-12-16 16:41:50.735693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:02.286 [2024-12-16 16:41:50.735948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:02.286 [2024-12-16 16:41:50.735959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:02.286 [2024-12-16 16:41:50.735968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:02.286 [2024-12-16 16:41:50.735977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:02.286 6012.80 IOPS, 23.49 MiB/s [2024-12-16T15:41:50.895Z]
00:36:02.286 [2024-12-16 16:41:50.749298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:02.286 [2024-12-16 16:41:50.749711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:02.286 [2024-12-16 16:41:50.749727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420
00:36:02.286 [2024-12-16 16:41:50.749735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set
00:36:02.286 [2024-12-16 16:41:50.749908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor
00:36:02.286 [2024-12-16 16:41:50.750080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:02.286 [2024-12-16 16:41:50.750091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:02.286 [2024-12-16 16:41:50.750104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:02.286 [2024-12-16 16:41:50.750110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
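The lone throughput sample above (6012.80 IOPS, 23.49 MiB/s) is internally consistent with a 4 KiB I/O size: 6012.80 x 4096 B = 24,628,428.8 B/s = 23.49 MiB/s. The 4 KiB block size is inferred from that arithmetic, not stated in this excerpt; the bracketed 2024-12-16T15:41:50.895Z stamp is presumably the same instant as 16:41:50.895 in the surrounding entries, reported in a different timezone.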
[log trimmed: the same ~13 ms retry cycle continues uninterrupted from 16:41:50.762 through 16:41:51.059, with no successful reconnect; every pass ends in "Resetting controller failed."]
00:36:02.549 [2024-12-16 16:41:51.071185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.549 [2024-12-16 16:41:51.071611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.549 [2024-12-16 16:41:51.071628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.549 [2024-12-16 16:41:51.071636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.549 [2024-12-16 16:41:51.071804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.549 [2024-12-16 16:41:51.071971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.549 [2024-12-16 16:41:51.071980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.549 [2024-12-16 16:41:51.071986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.549 [2024-12-16 16:41:51.071991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.549 [2024-12-16 16:41:51.083981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.549 [2024-12-16 16:41:51.084402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.549 [2024-12-16 16:41:51.084446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.549 [2024-12-16 16:41:51.084469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.549 [2024-12-16 16:41:51.085048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.549 [2024-12-16 16:41:51.085447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.549 [2024-12-16 16:41:51.085465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.549 [2024-12-16 16:41:51.085485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.549 [2024-12-16 16:41:51.085498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.549 [2024-12-16 16:41:51.098566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.549 [2024-12-16 16:41:51.099062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.549 [2024-12-16 16:41:51.099118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.549 [2024-12-16 16:41:51.099142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.549 [2024-12-16 16:41:51.099660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.549 [2024-12-16 16:41:51.099914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.549 [2024-12-16 16:41:51.099925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.549 [2024-12-16 16:41:51.099934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.549 [2024-12-16 16:41:51.099943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.549 [2024-12-16 16:41:51.111612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.549 [2024-12-16 16:41:51.112017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.549 [2024-12-16 16:41:51.112033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.549 [2024-12-16 16:41:51.112040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.549 [2024-12-16 16:41:51.112217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.549 [2024-12-16 16:41:51.112390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.549 [2024-12-16 16:41:51.112398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.549 [2024-12-16 16:41:51.112404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.549 [2024-12-16 16:41:51.112410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.549 [2024-12-16 16:41:51.124612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.549 [2024-12-16 16:41:51.124898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.549 [2024-12-16 16:41:51.124914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.549 [2024-12-16 16:41:51.124921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.549 [2024-12-16 16:41:51.125100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.549 [2024-12-16 16:41:51.125274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.549 [2024-12-16 16:41:51.125282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.550 [2024-12-16 16:41:51.125289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.550 [2024-12-16 16:41:51.125295] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.550 [2024-12-16 16:41:51.137664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.550 [2024-12-16 16:41:51.138052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.550 [2024-12-16 16:41:51.138069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.550 [2024-12-16 16:41:51.138077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.550 [2024-12-16 16:41:51.138267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.550 [2024-12-16 16:41:51.138450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.550 [2024-12-16 16:41:51.138459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.550 [2024-12-16 16:41:51.138465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.550 [2024-12-16 16:41:51.138472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.550 [2024-12-16 16:41:51.150898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.550 [2024-12-16 16:41:51.151273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.550 [2024-12-16 16:41:51.151290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.550 [2024-12-16 16:41:51.151298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.550 [2024-12-16 16:41:51.151481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.550 [2024-12-16 16:41:51.151663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.550 [2024-12-16 16:41:51.151672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.550 [2024-12-16 16:41:51.151679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.550 [2024-12-16 16:41:51.151685] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.810 [2024-12-16 16:41:51.164098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.810 [2024-12-16 16:41:51.164515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.810 [2024-12-16 16:41:51.164531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.810 [2024-12-16 16:41:51.164538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.810 [2024-12-16 16:41:51.164722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.810 [2024-12-16 16:41:51.164906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.810 [2024-12-16 16:41:51.164914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.810 [2024-12-16 16:41:51.164921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.810 [2024-12-16 16:41:51.164927] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.810 [2024-12-16 16:41:51.177220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.810 [2024-12-16 16:41:51.177624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.810 [2024-12-16 16:41:51.177643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.810 [2024-12-16 16:41:51.177650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.810 [2024-12-16 16:41:51.177823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.810 [2024-12-16 16:41:51.177996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.810 [2024-12-16 16:41:51.178003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.810 [2024-12-16 16:41:51.178010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.810 [2024-12-16 16:41:51.178016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.810 [2024-12-16 16:41:51.190300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.810 [2024-12-16 16:41:51.190725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.810 [2024-12-16 16:41:51.190741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.810 [2024-12-16 16:41:51.190748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.810 [2024-12-16 16:41:51.190921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.810 [2024-12-16 16:41:51.191099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.810 [2024-12-16 16:41:51.191108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.810 [2024-12-16 16:41:51.191114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.810 [2024-12-16 16:41:51.191120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.810 [2024-12-16 16:41:51.203533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.810 [2024-12-16 16:41:51.203951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.810 [2024-12-16 16:41:51.203967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.810 [2024-12-16 16:41:51.203975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.810 [2024-12-16 16:41:51.204163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.810 [2024-12-16 16:41:51.204346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.810 [2024-12-16 16:41:51.204354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.810 [2024-12-16 16:41:51.204361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.810 [2024-12-16 16:41:51.204367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.810 [2024-12-16 16:41:51.216913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.810 [2024-12-16 16:41:51.217379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.217398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.217406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.217619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.217829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.217839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.217846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.217854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.811 [2024-12-16 16:41:51.230450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.230786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.230829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.230852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.231453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.231704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.231712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.231719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.231725] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.811 [2024-12-16 16:41:51.243604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.243925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.243941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.243948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.244126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.244320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.244328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.244335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.244341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.811 [2024-12-16 16:41:51.256600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.258122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.258144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.258153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.258332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.258506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.258514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.258524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.258530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.811 [2024-12-16 16:41:51.269493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.269777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.269794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.269802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.269970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.270145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.270154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.270160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.270167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.811 [2024-12-16 16:41:51.282399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.282692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.282708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.282715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.282882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.283050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.283057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.283063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.283069] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.811 [2024-12-16 16:41:51.295180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.295515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.295532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.295539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.295706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.295873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.295881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.295887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.295893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.811 [2024-12-16 16:41:51.308167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.308502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.308517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.308524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.308683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.308842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.308849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.308855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.308860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.811 [2024-12-16 16:41:51.320984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.321333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.321349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.321356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.321524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.321691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.321699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.321705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.321711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.811 [2024-12-16 16:41:51.333893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.334193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.334209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.811 [2024-12-16 16:41:51.334216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.811 [2024-12-16 16:41:51.334384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.811 [2024-12-16 16:41:51.334551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.811 [2024-12-16 16:41:51.334558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.811 [2024-12-16 16:41:51.334565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.811 [2024-12-16 16:41:51.334571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.811 [2024-12-16 16:41:51.346816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.811 [2024-12-16 16:41:51.347117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.811 [2024-12-16 16:41:51.347137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.812 [2024-12-16 16:41:51.347145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.812 [2024-12-16 16:41:51.347314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.812 [2024-12-16 16:41:51.347481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.812 [2024-12-16 16:41:51.347489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.812 [2024-12-16 16:41:51.347495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.812 [2024-12-16 16:41:51.347500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.812 [2024-12-16 16:41:51.359784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.812 [2024-12-16 16:41:51.360230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.812 [2024-12-16 16:41:51.360276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.812 [2024-12-16 16:41:51.360298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.812 [2024-12-16 16:41:51.360888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.812 [2024-12-16 16:41:51.361057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.812 [2024-12-16 16:41:51.361065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.812 [2024-12-16 16:41:51.361071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.812 [2024-12-16 16:41:51.361077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.812 [2024-12-16 16:41:51.372815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.812 [2024-12-16 16:41:51.373092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.812 [2024-12-16 16:41:51.373114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.812 [2024-12-16 16:41:51.373137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.812 [2024-12-16 16:41:51.373310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.812 [2024-12-16 16:41:51.373483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.812 [2024-12-16 16:41:51.373491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.812 [2024-12-16 16:41:51.373498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.812 [2024-12-16 16:41:51.373503] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.812 [2024-12-16 16:41:51.385729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.812 [2024-12-16 16:41:51.386143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.812 [2024-12-16 16:41:51.386190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.812 [2024-12-16 16:41:51.386214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.812 [2024-12-16 16:41:51.386807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.812 [2024-12-16 16:41:51.387038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.812 [2024-12-16 16:41:51.387046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.812 [2024-12-16 16:41:51.387053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.812 [2024-12-16 16:41:51.387059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:02.812 [2024-12-16 16:41:51.398553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.812 [2024-12-16 16:41:51.398848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.812 [2024-12-16 16:41:51.398864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.812 [2024-12-16 16:41:51.398871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.812 [2024-12-16 16:41:51.399040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.812 [2024-12-16 16:41:51.399214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.812 [2024-12-16 16:41:51.399222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.812 [2024-12-16 16:41:51.399229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.812 [2024-12-16 16:41:51.399234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:02.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1201556 Killed "${NVMF_APP[@]}" "$@" 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1202883 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1202883 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1202883 ']' 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.812 [2024-12-16 16:41:51.411578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:02.812 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.812 [2024-12-16 16:41:51.411929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.812 [2024-12-16 16:41:51.411946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:02.812 [2024-12-16 16:41:51.411956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:02.812 [2024-12-16 16:41:51.412135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:02.812 [2024-12-16 16:41:51.412308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:02.812 [2024-12-16 16:41:51.412316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:02.812 [2024-12-16 16:41:51.412322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:02.812 [2024-12-16 16:41:51.412328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:03.072 [2024-12-16 16:41:51.424550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.072 [2024-12-16 16:41:51.424911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.072 [2024-12-16 16:41:51.424927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.072 [2024-12-16 16:41:51.424934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.072 [2024-12-16 16:41:51.425113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.072 [2024-12-16 16:41:51.425286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.072 [2024-12-16 16:41:51.425294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.072 [2024-12-16 16:41:51.425302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.072 [2024-12-16 16:41:51.425308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:03.072 [2024-12-16 16:41:51.437535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.072 [2024-12-16 16:41:51.437969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.072 [2024-12-16 16:41:51.437984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.072 [2024-12-16 16:41:51.437991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.072 [2024-12-16 16:41:51.438170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.072 [2024-12-16 16:41:51.438343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.072 [2024-12-16 16:41:51.438352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.072 [2024-12-16 16:41:51.438358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.072 [2024-12-16 16:41:51.438364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:03.072 [2024-12-16 16:41:51.450580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.072 [2024-12-16 16:41:51.450859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.450875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.450882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.451051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.451230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.451239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.451244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.451250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:03.073 [2024-12-16 16:41:51.461305] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:03.073 [2024-12-16 16:41:51.461343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.073 [2024-12-16 16:41:51.463681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.464020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.464037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.464044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.464224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.464399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.464407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.464414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.464420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:03.073 [2024-12-16 16:41:51.476689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.477104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.477121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.477128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.477296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.477464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.477472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.477478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.477484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:03.073 [2024-12-16 16:41:51.489663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.489937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.489953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.489960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.490135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.490306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.490314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.490321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.490326] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:03.073 [2024-12-16 16:41:51.502692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.503061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.503077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.503084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.503262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.503435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.503443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.503450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.503456] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:03.073 [2024-12-16 16:41:51.515661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.516013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.516028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.516035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.516213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.516386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.516394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.516400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.516406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:03.073 [2024-12-16 16:41:51.528639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.529084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.529106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.529114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.529288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.529462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.529470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.529480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.529486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:03.073 [2024-12-16 16:41:51.541394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:03.073 [2024-12-16 16:41:51.541550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.541914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.541930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.541937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.542111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.542281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.542289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.542296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.542303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:03.073 [2024-12-16 16:41:51.554565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.073 [2024-12-16 16:41:51.555004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.073 [2024-12-16 16:41:51.555021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8490 with addr=10.0.0.2, port=4420 00:36:03.073 [2024-12-16 16:41:51.555031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c8490 is same with the state(6) to be set 00:36:03.073 [2024-12-16 16:41:51.555206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c8490 (9): Bad file descriptor 00:36:03.073 [2024-12-16 16:41:51.555376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:03.073 [2024-12-16 16:41:51.555384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:03.073 [2024-12-16 16:41:51.555390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:03.073 [2024-12-16 16:41:51.555396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:03.073 [2024-12-16 16:41:51.562809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.073 [2024-12-16 16:41:51.562836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.073 [2024-12-16 16:41:51.562843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.073 [2024-12-16 16:41:51.562849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.073 [2024-12-16 16:41:51.562855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:03.073 [2024-12-16 16:41:51.564026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.073 [2024-12-16 16:41:51.564135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.073 [2024-12-16 16:41:51.564135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:36:03.074 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.074 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:03.074 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:03.074 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.074 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:03.334 [2024-12-16 16:41:51.707023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:03.334 5010.67 IOPS, 19.57 MiB/s [2024-12-16T15:41:51.943Z] Malloc0 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.334 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:03.335 [2024-12-16 16:41:51.777915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:03.335 [2024-12-16 16:41:51.778190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.335 16:41:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1201818 00:36:03.335 [2024-12-16 16:41:51.895207] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:36:05.202 5703.57 IOPS, 22.28 MiB/s [2024-12-16T15:41:55.188Z] 6433.50 IOPS, 25.13 MiB/s [2024-12-16T15:41:56.123Z] 7008.67 IOPS, 27.38 MiB/s [2024-12-16T15:41:57.056Z] 7456.00 IOPS, 29.12 MiB/s [2024-12-16T15:41:57.991Z] 7820.45 IOPS, 30.55 MiB/s [2024-12-16T15:41:58.925Z] 8124.08 IOPS, 31.73 MiB/s [2024-12-16T15:41:59.868Z] 8379.38 IOPS, 32.73 MiB/s [2024-12-16T15:42:00.802Z] 8589.79 IOPS, 33.55 MiB/s 00:36:12.193 Latency(us) 00:36:12.193 [2024-12-16T15:42:00.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.193 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:12.193 Verification LBA range: start 0x0 length 0x4000 00:36:12.193 Nvme1n1 : 15.00 8783.22 34.31 11242.99 0.00 6371.81 647.56 22344.66 00:36:12.193 [2024-12-16T15:42:00.802Z] =================================================================================================================== 00:36:12.193 [2024-12-16T15:42:00.802Z] Total : 8783.22 34.31 11242.99 0.00 6371.81 647.56 22344.66 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:12.451 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # 
sync 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.452 rmmod nvme_tcp 00:36:12.452 rmmod nvme_fabrics 00:36:12.452 rmmod nvme_keyring 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1202883 ']' 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1202883 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1202883 ']' 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1202883 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.452 16:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1202883 00:36:12.452 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:12.452 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:12.452 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1202883' 00:36:12.452 killing process with pid 1202883 00:36:12.452 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1202883 00:36:12.452 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1202883 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.711 16:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
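For reference, the target-side configuration that host/bdevperf.sh drove through the rpc_cmd wrapper above corresponds to the following standalone rpc.py sequence (a sketch, assuming a default nvmf_tgt RPC socket; rpc_cmd in the test harness forwards to scripts/rpc.py, and the arguments below are taken verbatim from the xtrace lines):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                        # host/bdevperf.sh@17
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                           # host/bdevperf.sh@18
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001      # host/bdevperf.sh@19
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                       # host/bdevperf.sh@20
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # host/bdevperf.sh@21

Once the listener is added, the log shows the stalled reconnect succeeding ("Resetting controller successful.") and throughput ramping back up through the 15-second run.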
00:36:15.249 00:36:15.249 real 0m25.958s 00:36:15.249 user 1m0.779s 00:36:15.249 sys 0m6.689s 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:15.249 ************************************ 00:36:15.249 END TEST nvmf_bdevperf 00:36:15.249 ************************************ 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.249 ************************************ 00:36:15.249 START TEST nvmf_target_disconnect 00:36:15.249 ************************************ 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:15.249 * Looking for test storage... 00:36:15.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.249 --rc genhtml_branch_coverage=1 00:36:15.249 --rc genhtml_function_coverage=1 00:36:15.249 --rc genhtml_legend=1 00:36:15.249 --rc geninfo_all_blocks=1 00:36:15.249 --rc geninfo_unexecuted_blocks=1 00:36:15.249 00:36:15.249 ' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.249 --rc genhtml_branch_coverage=1 00:36:15.249 --rc genhtml_function_coverage=1 00:36:15.249 --rc genhtml_legend=1 00:36:15.249 --rc geninfo_all_blocks=1 00:36:15.249 --rc geninfo_unexecuted_blocks=1 00:36:15.249 00:36:15.249 ' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.249 --rc genhtml_branch_coverage=1 00:36:15.249 --rc genhtml_function_coverage=1 00:36:15.249 --rc genhtml_legend=1 00:36:15.249 --rc geninfo_all_blocks=1 00:36:15.249 --rc geninfo_unexecuted_blocks=1 00:36:15.249 00:36:15.249 ' 00:36:15.249 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.250 --rc genhtml_branch_coverage=1 00:36:15.250 --rc genhtml_function_coverage=1 00:36:15.250 --rc genhtml_legend=1 00:36:15.250 --rc geninfo_all_blocks=1 00:36:15.250 --rc geninfo_unexecuted_blocks=1 00:36:15.250 00:36:15.250 ' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.250 16:42:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.821 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:21.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:21.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:21.822 Found net devices under 0000:af:00.0: cvl_0_0 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:21.822 Found net devices under 0000:af:00.1: cvl_0_1 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:21.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:36:21.822 00:36:21.822 --- 10.0.0.2 ping statistics --- 00:36:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.822 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:21.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:36:21.822 00:36:21.822 --- 10.0.0.1 ping statistics --- 00:36:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.822 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:21.822 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 ************************************ 00:36:21.823 START TEST nvmf_target_disconnect_tc1 00:36:21.823 ************************************ 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:21.823 16:42:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:21.823 [2024-12-16 16:42:09.630379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:21.823 [2024-12-16 16:42:09.630494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2248c50 with addr=10.0.0.2, port=4420 00:36:21.823 [2024-12-16 16:42:09.630552] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:21.823 [2024-12-16 16:42:09.630578] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:21.823 [2024-12-16 16:42:09.630599] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:21.823 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:21.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:21.823 Initializing NVMe Controllers 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.823 00:36:21.823 real 0m0.123s 00:36:21.823 user 0m0.050s 00:36:21.823 sys 0m0.072s 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 ************************************ 00:36:21.823 END TEST nvmf_target_disconnect_tc1 00:36:21.823 ************************************ 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 ************************************ 00:36:21.823 START TEST nvmf_target_disconnect_tc2 00:36:21.823 ************************************ 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1208305 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1208305 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1208305 ']' 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 [2024-12-16 16:42:09.773469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:21.823 [2024-12-16 16:42:09.773514] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.823 [2024-12-16 16:42:09.853303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:21.823 [2024-12-16 16:42:09.876417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:21.823 [2024-12-16 16:42:09.876454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
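For orientation: the tc1 case that finished above is a negative test. No target is listening yet, so the reconnect example's spdk_nvme_probe() must fail, and the NOT wrapper (the autotest_common.sh helper that inverts its command's exit status, visible in the xtrace above) turns that expected failure into a pass; the connect() error with errno 111 is exactly what it waits for. The shape of the test, assuming $rootdir points at the SPDK checkout:

  # tc1 passes only if the probe FAILS: nothing listens on 10.0.0.2:4420 yet.
  NOT "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

tc2, now starting, inverts the scenario: nvmf_tgt (-i 0 -e 0xFFFF -m 0xF0, pid 1208305, launched inside the target namespace) is brought up for real, configured over RPC, and then torn down underneath live I/O.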
00:36:21.823 [2024-12-16 16:42:09.876462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:21.823 [2024-12-16 16:42:09.876468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:21.823 [2024-12-16 16:42:09.876473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:21.823 [2024-12-16 16:42:09.877840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:21.823 [2024-12-16 16:42:09.877950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:21.823 [2024-12-16 16:42:09.877985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:21.823 [2024-12-16 16:42:09.877985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:21.823 16:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 Malloc0 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 [2024-12-16 16:42:10.046743] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 16:42:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.823 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.824 [2024-12-16 16:42:10.082992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1208524 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:21.824 16:42:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:23.740 16:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1208305 00:36:23.740 16:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error 
(sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Read completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.740 Write completed with error (sct=0, sc=8) 00:36:23.740 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 [2024-12-16 16:42:12.114603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write 
completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 [2024-12-16 16:42:12.114800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 
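Stepping back to the bring-up that preceded the aborted I/O above: the tc2 target was assembled with six RPCs, a 64 MB malloc bdev (512-byte blocks), the TCP transport, a subsystem, the bdev attached as its namespace, and listeners on 10.0.0.2:4420 for both the subsystem and discovery. In the harness these go through rpc_cmd, which forwards to SPDK's scripts/rpc.py against the running target; a roughly equivalent direct invocation, using the names from this run:

  rpc="scripts/rpc.py"   # from the SPDK checkout, with the target already running
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the target live, the reconnect example (pid 1208524) drives 32-deep 4096-byte random read/write I/O at it, and two seconds in the script kills the target out from under it, which is where the failed completions above come from.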
00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 [2024-12-16 16:42:12.114991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 
00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Read completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 Write completed with error (sct=0, sc=8) 00:36:23.741 starting I/O failed 00:36:23.741 [2024-12-16 16:42:12.115183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:23.742 [2024-12-16 16:42:12.115372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.115395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.115565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.115576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.115831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.115890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.116104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.116140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.116346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.116376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.116556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.116566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.116660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.116670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.116881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.116891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 
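Decoding the burst above: the four "CQ transport error -6 (No such device or address)" lines, one each for qpair ids 4, 3, 2, and 1, are the host noticing that all four I/O queue pairs (the reconnect tool ran with -c 0xF, four cores, hence four qpairs) lost their TCP sockets at once when the target died. Every outstanding command is then completed locally with sct=0, sc=8; read against the NVMe base spec's generic status code set (an interpretation, not something the log states), that is Status Code Type 0, Generic Command Status, with Status Code 0x08, Command Aborted due to SQ Deletion: the I/O did not fail on media, it was aborted because its submission queue went away. A quick way to size the burst from a saved copy of this console output (build.log is a hypothetical capture name):

  # Count the locally aborted completions in a captured copy of this log.
  grep -c 'completed with error (sct=0, sc=8)' build.log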
00:36:23.742 [2024-12-16 16:42:12.116979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.116989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.117240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.117273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.117455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.117485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.117770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.117801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.117979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.118009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.118241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.118274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.118470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.118480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.118622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.118660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.118950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.118980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.119173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.119205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 
00:36:23.742 [2024-12-16 16:42:12.119340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.119372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.119507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.119542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.119605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.119615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.119777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.119787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.119992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.120002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.120191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.120223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.120385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.120395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.120526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.120535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.120683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.120693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.120857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.120889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 
00:36:23.742 [2024-12-16 16:42:12.121064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.121101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.121280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.121313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.121493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.121503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.121585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.121595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.121673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.121683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.121863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.121896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.122185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.122218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.122411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.122442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.122664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.122674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 00:36:23.742 [2024-12-16 16:42:12.122827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.742 [2024-12-16 16:42:12.122837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.742 qpair failed and we were unable to recover it. 
00:36:23.742 [2024-12-16 16:42:12.122984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.122994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.123226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.123237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.123362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.123372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.123447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.123457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.123662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.123683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.123890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.123909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.124157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.124170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.124313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.124323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.124482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.124492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.124586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.124596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 
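Each repeating block from here on is one failed reconnect attempt: posix_sock_create's connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED on Linux, since nothing listens there after the target was killed), nvme_tcp_qpair_connect_sock gives up on that tqpair, and the tool logs "qpair failed and we were unable to recover it." before trying again; the handful of distinct tqpair addresses (0x7fb4b0000b90, 0x7fb4b4000b90, ...) are successive in-memory connection objects, not different endpoints. The driving lines in the script are simply (target_disconnect.sh @45/@47 above):

  kill -9 "$nvmfpid"   # 1208305 in this run: the target dies mid-I/O
  sleep 2              # let the reconnect example churn through failed retries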
00:36:23.743 [2024-12-16 16:42:12.124740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.124773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.125060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.125092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.125328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.125360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.125556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.125588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.125788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.125826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.125968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.125981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.126127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.126141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.126277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.126291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.126377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.126391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.126622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.126654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 
00:36:23.743 [2024-12-16 16:42:12.126858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.126891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.127108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.127141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.127275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.127307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.127448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.127479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.127660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.127674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.127805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.127819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.127961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.127975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.128208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.128222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.128381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.128394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 00:36:23.743 [2024-12-16 16:42:12.128466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.743 [2024-12-16 16:42:12.128479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.743 qpair failed and we were unable to recover it. 
00:36:23.743 [2024-12-16 16:42:12.128681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.743 [2024-12-16 16:42:12.128693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:23.743 qpair failed and we were unable to recover it.
[the two-line connect()/qpair error above repeats continuously from 16:42:12.128 through 16:42:12.178; every attempt to 10.0.0.2 port 4420 on tqpair=0x7fb4b0000b90 fails with errno = 111 (ECONNREFUSED) and ends with "qpair failed and we were unable to recover it."]
00:36:23.749 [2024-12-16 16:42:12.179054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.179086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.179269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.179301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.179475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.179506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.179693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.179724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.179998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.180035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.180307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.180340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.180620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.180652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.180939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.180970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.181244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.181277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.181492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.181524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 
00:36:23.749 [2024-12-16 16:42:12.181645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.181676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.181861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.181892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.182168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.182202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.182396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.182428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.182641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.182672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.182860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.182891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.183064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.183103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.183345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.183377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.749 [2024-12-16 16:42:12.183623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.749 [2024-12-16 16:42:12.183654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.749 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.183914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.183946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 
00:36:23.750 [2024-12-16 16:42:12.184136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.184169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.184387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.184418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.184612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.184643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.184747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.184778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.185014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.185045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.185294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.185326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.185499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.185531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.185720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.185751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.185947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.185979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.186169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.186202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 
00:36:23.750 [2024-12-16 16:42:12.186406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.186437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.186708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.186740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.186914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.186946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.187140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.187173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.187354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.187385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.187630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.187661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.187844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.187875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.188105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.188137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.188347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.188378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.188589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.188621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 
00:36:23.750 [2024-12-16 16:42:12.188805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.188836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.189080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.189133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.189248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.189280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.189460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.189492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.189666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.189702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.189949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.189982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.190276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.190309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.190500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.190531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.190801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.190833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.190951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.190982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 
00:36:23.750 [2024-12-16 16:42:12.191174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.191207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.191404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.191435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.750 [2024-12-16 16:42:12.191638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.750 [2024-12-16 16:42:12.191669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.750 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.191935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.191966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.192224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.192258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.192433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.192464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.192757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.192789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.192913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.192945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.193232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.193264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.193505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.193538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 
00:36:23.751 [2024-12-16 16:42:12.193651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.193682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.193945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.193977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.194176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.194209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.194465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.194496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.194685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.194716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.194901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.194932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.195113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.195147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.195410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.195442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.195574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.195605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.195789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.195821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 
00:36:23.751 [2024-12-16 16:42:12.196024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.196055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.196350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.196383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.196610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.196642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.196847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.196879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.197031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.197063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.197341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.197374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.197640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.197671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.197819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.197850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.198046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.198077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.198294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.198327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 
00:36:23.751 [2024-12-16 16:42:12.198514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.198545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.198720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.198752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.198949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.198981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.199246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.199278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.199453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.199490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.199699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.199731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.199948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.199979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.200223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.200255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.200447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.200479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.200602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.200633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 
00:36:23.751 [2024-12-16 16:42:12.200907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.200938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.201134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.751 [2024-12-16 16:42:12.201166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.751 qpair failed and we were unable to recover it. 00:36:23.751 [2024-12-16 16:42:12.201359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.201392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.201656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.201688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.201793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.201824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.202011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.202043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.202253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.202285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.202568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.202599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.202858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.202891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.203139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.203173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 
00:36:23.752 [2024-12-16 16:42:12.203466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.203497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.203782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.203815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.204026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.204057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.204332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.204365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.204582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.204614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.204912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.204943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.205218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.205251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.205538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.205569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.205834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.205866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 00:36:23.752 [2024-12-16 16:42:12.206115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.752 [2024-12-16 16:42:12.206149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:23.752 qpair failed and we were unable to recover it. 
00:36:23.752 [2024-12-16 16:42:12.206264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.752 [2024-12-16 16:42:12.206295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:23.752 qpair failed and we were unable to recover it.
00:36:23.752 [2024-12-16 16:42:12.206645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.752 [2024-12-16 16:42:12.206719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.752 qpair failed and we were unable to recover it.
00:36:23.752 [... the same connect()/qpair-failure triplet repeated 58 more times for tqpair=0xa35cd0, 2024-12-16 16:42:12.206978 through 16:42:12.221762 ...]
00:36:23.754 [2024-12-16 16:42:12.222057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.222088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.222379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.222411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.222683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.222713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.223004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.223036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.223254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.223287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.223463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.223493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.223680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.223711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.223908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.223940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.224130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.224162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.224367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.224399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 
00:36:23.754 [2024-12-16 16:42:12.224595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.224626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.224845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.224876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.225152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.225184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.225371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.225402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.225579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.225610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.225734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.225765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.226031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.226062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.226265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.226299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.226553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.226584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.226833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.226865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 
00:36:23.754 [2024-12-16 16:42:12.227086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.227128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.227325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.227357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.227553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.227584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.227837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.227869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.228163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.228197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.754 qpair failed and we were unable to recover it. 00:36:23.754 [2024-12-16 16:42:12.228327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.754 [2024-12-16 16:42:12.228359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.228637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.228668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.228878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.228909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.229087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.229138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.229333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.229365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 
00:36:23.755 [2024-12-16 16:42:12.229547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.229580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.229764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.229794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.230039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.230071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.230347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.230381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.230523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.230553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.230731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.230763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.230949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.230987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.231183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.231215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.231414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.231446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.231696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.231727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 
00:36:23.755 [2024-12-16 16:42:12.231871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.231901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.232144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.232177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.232475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.232506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.232712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.232744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.233022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.233053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.233360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.233394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.233590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.233620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.233878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.233910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.234039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.234071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.234557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.234591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 
00:36:23.755 [2024-12-16 16:42:12.234874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.234906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.235107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.235139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.235390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.235421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.235637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.235667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.235885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.235917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.236090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.236134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.236383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.236413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.236523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.236555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.236768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.236799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.236941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.236972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 
00:36:23.755 [2024-12-16 16:42:12.237127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.237161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.237352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.237385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.237685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.237717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.238004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.238035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.755 [2024-12-16 16:42:12.238251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.755 [2024-12-16 16:42:12.238284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.755 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.238502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.238534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.238831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.238863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.239041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.239072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.239265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.239297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.239565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.239596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 
00:36:23.756 [2024-12-16 16:42:12.239798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.239830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.240029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.240060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.240268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.240300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.240496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.240527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.240777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.240808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.241007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.241038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.241251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.241284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.241599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.241631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.241775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.241807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.242082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.242126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 
00:36:23.756 [2024-12-16 16:42:12.242375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.242407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.242656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.242687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.242912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.242943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.243119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.243152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.243366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.243397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.243665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.243696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.243943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.243974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.244248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.244281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.244571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.244601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.244825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.244856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 
00:36:23.756 [2024-12-16 16:42:12.245075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.245123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.245351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.245384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.245655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.245686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.245891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.245922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.246116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.246149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.246347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.246379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.246518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.246549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.246797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.246829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.246961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.246992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.247183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.247216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 
00:36:23.756 [2024-12-16 16:42:12.247408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.247438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.247622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.247654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.247846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.247877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.248067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.248107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.756 qpair failed and we were unable to recover it. 00:36:23.756 [2024-12-16 16:42:12.248372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.756 [2024-12-16 16:42:12.248409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.248607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.248638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.248897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.248929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.249199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.249233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.249524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.249554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.249810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.249842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 
00:36:23.757 [2024-12-16 16:42:12.250043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.250074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.250260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.250292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.250544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.250576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.250824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.250854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.251124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.251156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.251458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.251490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.251754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.251785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.252082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.252123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.252328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.252359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.252559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.252591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 
00:36:23.757 [2024-12-16 16:42:12.252777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.252807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.253010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.253042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.253257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.253290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.253401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.253432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.253717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.253748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.253926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.253956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.254233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.254267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.254476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.254508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.254777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.254809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.255026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.255057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 
00:36:23.757 [2024-12-16 16:42:12.255262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.255295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.255595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.255626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.255917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.255949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.256221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.256255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.256473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.256503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.256641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.256672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.256858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.256890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.257145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.257178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.257377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.257409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.257686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.257717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 
00:36:23.757 [2024-12-16 16:42:12.257936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.257968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.258146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.258179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.258455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.258490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.258682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.757 [2024-12-16 16:42:12.258715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.757 qpair failed and we were unable to recover it. 00:36:23.757 [2024-12-16 16:42:12.258965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.258997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.259219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.259252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.259493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.259526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.259705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.259736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.260015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.260047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.260338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.260371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 
00:36:23.758 [2024-12-16 16:42:12.260575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.260606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.260878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.260910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.261168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.261202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.261423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.261455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.261651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.261682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.261943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.261975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.262155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.262188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.262470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.262502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.262811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.262843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.263128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.263162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 
00:36:23.758 [2024-12-16 16:42:12.263439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.263471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.263738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.263770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.264067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.264181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.264479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.264511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.264810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.264841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.265062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.265104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.265222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.265253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.265547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.265578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.265849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.265880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.266160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.266194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 
00:36:23.758 [2024-12-16 16:42:12.266410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.266442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.266636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.266667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.266853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.266891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.267090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.267132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.267320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.267351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.267621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.267653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.267980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.268011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.268286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.268319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.268552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.268584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.268834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.268865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 
00:36:23.758 [2024-12-16 16:42:12.269118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.269151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.269377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.269409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.269609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.269641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.758 [2024-12-16 16:42:12.269918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.758 [2024-12-16 16:42:12.269950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.758 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.270165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.270198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.270451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.270483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.270773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.270805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.271081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.271136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.271408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.271439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.271738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.271770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 
00:36:23.759 [2024-12-16 16:42:12.272042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.272073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.272369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.272402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.272675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.272707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.272970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.273000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.273193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.273226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.273370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.273402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.273657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.273688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.273984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.274015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.274333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.274367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.274607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.274638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 
00:36:23.759 [2024-12-16 16:42:12.274906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.274938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.275208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.275240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.275488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.275519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.275821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.275852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.276074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.276116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.276312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.276343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.276538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.276570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.276845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.276876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.277152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.277184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.277478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.277510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 
00:36:23.759 [2024-12-16 16:42:12.277692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.759 [2024-12-16 16:42:12.277723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.759 qpair failed and we were unable to recover it. 00:36:23.759 [2024-12-16 16:42:12.277927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.277958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.278219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.278253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.278388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.278424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.278678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.278710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.278940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.278972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.279271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.279304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.279488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.279519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.279709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.279741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.279926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.279958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 
00:36:23.760 [2024-12-16 16:42:12.280242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.280275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.280469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.280501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.280705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.280737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.280915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.280946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.281202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.281235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.281537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.281569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.281852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.281883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.282072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.282115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.282311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.282341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.282546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.282578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 
00:36:23.760 [2024-12-16 16:42:12.282792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.282823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.283085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.283145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.283424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.283455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.283586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.283617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.283815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.283847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.283998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.284028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.284304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.284337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.284617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.284648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.284933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.284965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.285242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.285275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 
00:36:23.760 [2024-12-16 16:42:12.285412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.285450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.285752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.285783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.285989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.286021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.286215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.286248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.286523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.286554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.286781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.286812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.287092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.287137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.287327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.287359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.287558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.287590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 00:36:23.760 [2024-12-16 16:42:12.287802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.760 [2024-12-16 16:42:12.287834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.760 qpair failed and we were unable to recover it. 
00:36:23.761 [2024-12-16 16:42:12.288055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.288085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.288278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.288310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.288576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.288609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.288826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.288857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.289127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.289161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.289441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.289473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.289784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.289815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.290073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.290114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.290384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.290415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.290620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.290651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 
00:36:23.761 [2024-12-16 16:42:12.290778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.290810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.291027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.291058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.291296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.291333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.291608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.291639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.291829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.291864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.292130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.292165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.292391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.292423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.292653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.292684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.292968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.293000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.293277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.293311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 
00:36:23.761 [2024-12-16 16:42:12.293541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.293571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.293873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.293904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.294176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.294208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.294423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.294453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.294646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.294677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.294939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.294970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.295220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.295252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.295552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.295582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.295851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.295882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.296087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.296132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 
00:36:23.761 [2024-12-16 16:42:12.296377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.296409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.296590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.296627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.296827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.296857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.296990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.297023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.761 qpair failed and we were unable to recover it. 00:36:23.761 [2024-12-16 16:42:12.297298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.761 [2024-12-16 16:42:12.297331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.297656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.297688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.297944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.297976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.298238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.298269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.298534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.298565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.298842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.298873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 
00:36:23.762 [2024-12-16 16:42:12.299162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.299195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.299477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.299508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.299707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.299738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.299986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.300017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.300273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.300305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.300538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.300571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.300826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.300857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.301050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.301081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.301293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.301325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.301511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.301542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 
00:36:23.762 [2024-12-16 16:42:12.301740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.301771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.301963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.301995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.302234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.302268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.302468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.302499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.302692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.302722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.302920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.302951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.303181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.303213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.303514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.303547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.303765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.303803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 00:36:23.762 [2024-12-16 16:42:12.304001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.762 [2024-12-16 16:42:12.304031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:23.762 qpair failed and we were unable to recover it. 
00:36:23.764 [2024-12-16 16:42:12.324407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.764 [2024-12-16 16:42:12.324484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:23.764 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.324718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.324754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.325027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.325060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.325364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.325401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.325608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.325639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.325906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.325939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.326085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.326130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.326285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.326317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.326542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.326574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:23.765 [2024-12-16 16:42:12.326769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:23.765 [2024-12-16 16:42:12.326799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:23.765 qpair failed and we were unable to recover it.
00:36:24.050 [2024-12-16 16:42:12.351802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.351833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.352054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.352087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.352316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.352348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.352547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.352579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.352785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.352818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.353013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.353045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.353197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.353230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.353414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.353447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.353629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.353661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.353867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.353898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 
00:36:24.050 [2024-12-16 16:42:12.354051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.354082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.354233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.354265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.354512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.354545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.354751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.354784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.354894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.354925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.355125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.355158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.355282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.355314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.355494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.355525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.355720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.355753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.355950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.355982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 
00:36:24.050 [2024-12-16 16:42:12.356178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.356210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.356341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.356372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.356652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.356684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.356906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.356938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.357121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.357155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.357438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.357469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.357601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.357634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.357885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.050 [2024-12-16 16:42:12.357917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.050 qpair failed and we were unable to recover it. 00:36:24.050 [2024-12-16 16:42:12.358111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.358144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.358339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.358371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 
00:36:24.051 [2024-12-16 16:42:12.358620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.358652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.358911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.358943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.359152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.359185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.359465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.359498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.359702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.359734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.359872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.359903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.360123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.360157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.360284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.360314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.360438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.360470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.360602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.360633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 
00:36:24.051 [2024-12-16 16:42:12.360916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.360947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.361151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.361183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.361362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.361393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.361575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.361606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.361857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.361890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.362142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.362196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.362385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.362416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.362611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.362644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.362845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.362877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.363004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.363035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 
00:36:24.051 [2024-12-16 16:42:12.363252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.363285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.363479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.363510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.363645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.363676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.363897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.363936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.364119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.364153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.364292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.364323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.051 qpair failed and we were unable to recover it. 00:36:24.051 [2024-12-16 16:42:12.364513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.051 [2024-12-16 16:42:12.364544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.364728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.364760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.365019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.365051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.365309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.365343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 
00:36:24.052 [2024-12-16 16:42:12.365523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.365555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.365743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.365776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.365972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.366005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.366285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.366319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.366431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.366463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.366678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.366710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.366846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.366877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.367180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.367214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.367422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.367454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.367578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.367611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 
00:36:24.052 [2024-12-16 16:42:12.367875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.367907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.368172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.368206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.368395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.368427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.368699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.368730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.368914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.368946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.369167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.369200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.369404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.369436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.369631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.369662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.369951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.369983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.370183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.370216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 
00:36:24.052 [2024-12-16 16:42:12.370355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.370388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.370585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.370616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.370727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.370759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.370886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.370918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.371047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.371077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.371363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.371396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.371589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.371620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.371745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.371775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.371978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.372017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.052 qpair failed and we were unable to recover it. 00:36:24.052 [2024-12-16 16:42:12.372153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.052 [2024-12-16 16:42:12.372188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 
00:36:24.053 [2024-12-16 16:42:12.372373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.372405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.372524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.372555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.372800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.372833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.372950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.372982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.373171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.373213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.373440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.373472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.373656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.373686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.373904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.373935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.374130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.374162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.374309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.374343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 
00:36:24.053 [2024-12-16 16:42:12.374460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.374496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.374670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.374701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.374991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.375023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.375159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.375190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.375303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.375333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.375526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.375577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.375850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.375883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.376438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.376476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.376722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.376758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.376890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.376922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 
00:36:24.053 [2024-12-16 16:42:12.377129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.053 [2024-12-16 16:42:12.377163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.053 qpair failed and we were unable to recover it. 00:36:24.053 [2024-12-16 16:42:12.377350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.377381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.377629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.377660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.377789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.377821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.378021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.378053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.378281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.378312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.378420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.378451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.378725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.378756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.378962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.378997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.379200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.379235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 
00:36:24.054 [2024-12-16 16:42:12.379480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.379511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.379635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.379674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.379854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.379886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.380038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.380070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.380221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.380254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.380522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.380554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.380693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.380727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.380935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.380966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.381163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.381198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.381343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.381382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 
00:36:24.054 [2024-12-16 16:42:12.381578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.381611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.381786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.381817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.382070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.382117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.382317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.382351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.382490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.382527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.382733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.382765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.382971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.383003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.383223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.383257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.383507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.383540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 00:36:24.054 [2024-12-16 16:42:12.383739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.054 [2024-12-16 16:42:12.383771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.054 qpair failed and we were unable to recover it. 
00:36:24.054 [2024-12-16 16:42:12.383895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.054 [2024-12-16 16:42:12.383926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.054 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats ~210 times between 16:42:12.383895 and 16:42:12.430464; intermediate repetitions elided ...]
00:36:24.060 [2024-12-16 16:42:12.430433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.060 [2024-12-16 16:42:12.430464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.060 qpair failed and we were unable to recover it.
00:36:24.060 [2024-12-16 16:42:12.430674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.430705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.430837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.430866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.431051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.431081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.431227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.431258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.431429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.431459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.431642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.431672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.431793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.431823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.431944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.431973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.432112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.432144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 00:36:24.060 [2024-12-16 16:42:12.432320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.060 [2024-12-16 16:42:12.432351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.060 qpair failed and we were unable to recover it. 
00:36:24.060 [2024-12-16 16:42:12.432467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.432499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.432670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.432700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.432870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.432900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.433010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.433042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.433162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.433193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.433379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.433410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.433687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.433724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.434011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.434041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.434173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.434206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.434380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.434410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 
00:36:24.061 [2024-12-16 16:42:12.434576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.434606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.434747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.434777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.434993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.435024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.435216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.435249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.435457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.435489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.435748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.435782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.435973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.436005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.436191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.436225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.436396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.436426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.436558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.436589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 
00:36:24.061 [2024-12-16 16:42:12.436836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.436867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.437040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.437072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.437276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.437308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.437480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.437511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.437688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.437719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.437902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.437934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.438128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.438160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.438343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.438374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.438564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.438596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.438766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.438797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 
00:36:24.061 [2024-12-16 16:42:12.438912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.438943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.439183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.439216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.439482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.439513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.439624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.439656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.439777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.439807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.439995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.440026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.440270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.440302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.440476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.440508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.440635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.440666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 00:36:24.061 [2024-12-16 16:42:12.440908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.061 [2024-12-16 16:42:12.440939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.061 qpair failed and we were unable to recover it. 
00:36:24.061 [2024-12-16 16:42:12.441060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.441090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.441285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.441317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.441437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.441468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.441650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.441683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.441943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.441973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.442076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.442117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.442247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.442276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.442405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.442435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.442606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.442636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.442876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.442906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 
00:36:24.062 [2024-12-16 16:42:12.443028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.443057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.443257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.443290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.443407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.443437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.443664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.443694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.443944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.443973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.444108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.444140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.444259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.444291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.444411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.444443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.444547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.444577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.444791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.444823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 
00:36:24.062 [2024-12-16 16:42:12.445008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.445040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.445172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.445204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.445399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.445429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.445569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.445600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.445798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.445829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.446071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.446116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.446238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.446286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.446419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.446450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.446635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.446664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.446852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.446882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 
00:36:24.062 [2024-12-16 16:42:12.447144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.447178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.447312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.447342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.447607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.447639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.447901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.447932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.448200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.448239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.448343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.448373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.448589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.448621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.448814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.448844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.448946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.062 [2024-12-16 16:42:12.448976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.062 qpair failed and we were unable to recover it. 00:36:24.062 [2024-12-16 16:42:12.449236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.449269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 
00:36:24.063 [2024-12-16 16:42:12.449373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.449403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.449616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.449648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.449891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.449922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.450184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.450217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.450387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.450417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.450525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.450555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.450740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.450771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.451012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.451044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.451204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.451236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.451484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.451515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 
00:36:24.063 [2024-12-16 16:42:12.451756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.451787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.452068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.452110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.452299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.452330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.452504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.452535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.452716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.452747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.452918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.452948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.453140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.453173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.453348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.453378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.453627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.453659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.453919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.453949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 
00:36:24.063 [2024-12-16 16:42:12.454136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.454168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.454397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.454429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.454676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.454708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.454886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.454918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.455108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.455140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.455346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.455378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.455560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.455591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.455770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.455801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.456013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.456050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.456240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.456271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 
00:36:24.063 [2024-12-16 16:42:12.456480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.456511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.456800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.456831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.457042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.063 [2024-12-16 16:42:12.457075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.063 qpair failed and we were unable to recover it. 00:36:24.063 [2024-12-16 16:42:12.457327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.457359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 00:36:24.064 [2024-12-16 16:42:12.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.457517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 00:36:24.064 [2024-12-16 16:42:12.457625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.457662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 00:36:24.064 [2024-12-16 16:42:12.457946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.457979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 00:36:24.064 [2024-12-16 16:42:12.458247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.458280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 00:36:24.064 [2024-12-16 16:42:12.458408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.458438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 00:36:24.064 [2024-12-16 16:42:12.458644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.064 [2024-12-16 16:42:12.458674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.064 qpair failed and we were unable to recover it. 
00:36:24.064 [2024-12-16 16:42:12.458864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.458894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.459123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.459156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.459345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.459376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.459640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.459672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.459805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.459837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.460108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.460141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.460265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.460297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.460487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.460517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.460686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.460716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.460917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.460947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.461191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.461224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.461392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.461423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.461593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.461624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.461828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.461859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.462044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.462076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.462193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.462224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.462349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.462380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.462572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.462603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.462848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.462879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.463064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.463115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.463236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.463267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.463454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.463486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.463611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.463653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.463846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.463876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.464058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.464088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.464222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.464253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.464464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.464496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.464811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.464842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.464960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.464991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.465126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.465159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.465284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.465314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.465551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.465583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.064 qpair failed and we were unable to recover it.
00:36:24.064 [2024-12-16 16:42:12.465773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.064 [2024-12-16 16:42:12.465805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.465974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.466005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.466186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.466219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.466433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.466464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.466709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.466739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.466919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.466949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.467088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.467128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.467240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.467271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.467472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.467501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.467742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.467772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.467975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.468005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.468175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.468207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.468374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.468407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.468540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.468571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.468771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.468802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.468980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.469009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.469247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.469279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.469447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.469478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.469606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.469638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.469772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.469803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.469986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.470016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.470194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.470228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.470333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.470364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.470535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.470566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.470704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.470735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.470850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.470882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.471014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.471044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.471249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.471280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.471463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.471494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.471732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.471763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.471890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.471921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.472090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.472139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.472316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.472347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.472528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.472558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.472733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.472764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.472892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.472924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.473112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.473144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.473265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.473296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.473539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.473571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.473737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.065 [2024-12-16 16:42:12.473768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.065 qpair failed and we were unable to recover it.
00:36:24.065 [2024-12-16 16:42:12.473979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.474009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.474237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.474270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.474444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.474476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.474712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.474743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.474946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.474977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.475131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.475163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.475433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.475464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.475653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.475682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.475868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.475898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.476019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.476049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.476237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.476270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.476390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.476420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.476523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.476555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.476841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.476872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.477008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.477040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.477184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.477215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.477350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.477382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.477645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.477676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.477841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.477878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.478117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.478150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.478255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.478286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.478456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.478486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.478679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.478711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.478885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.478917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.479107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.479139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.479395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.479427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.479541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.479571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.479805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.479835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.480006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.480037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.480221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.480253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.480451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.480483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.480653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.480684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.480869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.480902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.481005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.481035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.481230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.481262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.481441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.481472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.481654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.481684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.481937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.481968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.482138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.482170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.066 qpair failed and we were unable to recover it.
00:36:24.066 [2024-12-16 16:42:12.482356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.066 [2024-12-16 16:42:12.482387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.482508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.482538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.482718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.482751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.482884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.482914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.483082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.483123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.483315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.483346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.483547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.483579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.483705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.483734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.483907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.483937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.484137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.484169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.484341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.484373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.484580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.484612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.484791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.484822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.485081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.485121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.485382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.485412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.485608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.485639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.485884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.485914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.486115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.486147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.486404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.486434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.486601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.486631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.486819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.486857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.487073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.487121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.487245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.487276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.487468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.487498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.487624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.487655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.487912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.487942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.488126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.488159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.488337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.488367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.488489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.488521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.488730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.488761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.488885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.488915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.067 qpair failed and we were unable to recover it.
00:36:24.067 [2024-12-16 16:42:12.489109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.067 [2024-12-16 16:42:12.489142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.489258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.489288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.489399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.489428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.489716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.489747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.489889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.489920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.490116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.490149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.490408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.490438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.490619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.490647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.490821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.490850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.491032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.491062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.491305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.491335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.491450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.491477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.491658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.491686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.491942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.491971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.492255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.492287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.492409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.492437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.492561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.492595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.492713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.492742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.492996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.493025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.493150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.493182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.493446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.493474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.493605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.493633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.493889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.493918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.494107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.494137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.494402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.494431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.494567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.494595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.494783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.494811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.494980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.495010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.495138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.495170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.495407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.495436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.495622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.495651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.495835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.495864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.496045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.496076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.496255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.496285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.496401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.496430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.496536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.496566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.496823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.496855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.497040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.497071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.497271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.497304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.497408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.068 [2024-12-16 16:42:12.497438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.068 qpair failed and we were unable to recover it.
00:36:24.068 [2024-12-16 16:42:12.497714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.497745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.497942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.497974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.498149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.498181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.498298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.498328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.498508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.498539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.498650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.498681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.498796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.498827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.499004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.499035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.499217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.499250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.499437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.499468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.499658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.499689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.499868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.499900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.500004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.500034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.500172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.500206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.500457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.500489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.500659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.500691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.500862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.500892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.501064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.501111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.501231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.501262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.501378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.501408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.501594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.501624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.501827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.501858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.501962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.501993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.502230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.502262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.502445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.502476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.502599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.502629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.502858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.502889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.503070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.069 [2024-12-16 16:42:12.503120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.069 qpair failed and we were unable to recover it.
00:36:24.069 [2024-12-16 16:42:12.503248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.503279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.503529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.503560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.503806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.503837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.503971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.504001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.504123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.504155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.504354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.504387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.504567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.504598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.504705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.504735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.504844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.504874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.505045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.505077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 
00:36:24.069 [2024-12-16 16:42:12.505255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.069 [2024-12-16 16:42:12.505286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.069 qpair failed and we were unable to recover it. 00:36:24.069 [2024-12-16 16:42:12.505417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.505447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.505629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.505660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.505795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.505827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.505929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.505959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.506076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.506115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.506320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.506351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.506479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.506510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.506626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.506658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.506829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.506861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 
00:36:24.070 [2024-12-16 16:42:12.507046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.507077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.507250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.507281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.507491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.507522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.507650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.507681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.507865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.507897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.508012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.508042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.508180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.508213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.508386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.508417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.508611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.508642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.508890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.508921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 
00:36:24.070 [2024-12-16 16:42:12.509227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.509261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.509476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.509508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.509775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.509806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.509978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.510009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.510249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.510282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.510468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.510499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.510598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.510628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.510815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.510847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.511017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.511048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.511241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.511273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 
00:36:24.070 [2024-12-16 16:42:12.511409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.511439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.511620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.511650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.511886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.511917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.512028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.512059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.512314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.512347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.512527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.512558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.512684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.512715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.512901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.512933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.513119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.513153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.070 [2024-12-16 16:42:12.513338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.513374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 
00:36:24.070 [2024-12-16 16:42:12.513586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.070 [2024-12-16 16:42:12.513617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.070 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.513786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.513818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.513988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.514019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.514185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.514217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.514342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.514373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.514541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.514573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.514684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.514715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.514900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.514936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.515061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.515092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.515245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.515275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 
00:36:24.071 [2024-12-16 16:42:12.515492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.515523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.515630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.515661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.515794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.515824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.516086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.516128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.516320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.516352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.516544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.516576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.516769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.516800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.517013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.517044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.517249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.517282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.517502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.517533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 
00:36:24.071 [2024-12-16 16:42:12.517647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.517678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.517952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.517985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.518250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.518284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.518405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.518436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.518698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.518737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.518925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.518957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.519138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.519170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.519351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.519382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.519638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.519669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.519857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.519887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 
00:36:24.071 [2024-12-16 16:42:12.520103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.520136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.520350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.520381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.520566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.520598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.520779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.520810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.520941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.520972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.521088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.521129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.521394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.521425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.521666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.521698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.521806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.521837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 00:36:24.071 [2024-12-16 16:42:12.522047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.071 [2024-12-16 16:42:12.522078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.071 qpair failed and we were unable to recover it. 
00:36:24.071 [2024-12-16 16:42:12.522191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.522223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.522465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.522496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.522735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.522767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.522948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.522980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.523239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.523271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.523453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.523485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.523683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.523714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.523922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.523953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.524141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.524180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.524296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.524327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 
00:36:24.072 [2024-12-16 16:42:12.524524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.524555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.524680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.524712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.524820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.524850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.525114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.525146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.525323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.525354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.525543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.525575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.525688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.525719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.525982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.526014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.526212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.526244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.526362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.526393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 
00:36:24.072 [2024-12-16 16:42:12.526592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.526623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.526812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.526843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.527056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.527088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.527273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.527304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.527476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.527507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.527695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.527727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.527932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.527962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.528074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.528116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.528299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.528330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.528566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.528597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 
00:36:24.072 [2024-12-16 16:42:12.528701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.528731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.528981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.529012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.529201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.529235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.529408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.529439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.072 qpair failed and we were unable to recover it. 00:36:24.072 [2024-12-16 16:42:12.529706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.072 [2024-12-16 16:42:12.529737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.073 qpair failed and we were unable to recover it. 00:36:24.073 [2024-12-16 16:42:12.529919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.073 [2024-12-16 16:42:12.529956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.073 qpair failed and we were unable to recover it. 00:36:24.073 [2024-12-16 16:42:12.530226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.073 [2024-12-16 16:42:12.530259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.073 qpair failed and we were unable to recover it. 00:36:24.073 [2024-12-16 16:42:12.530442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.073 [2024-12-16 16:42:12.530473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.073 qpair failed and we were unable to recover it. 00:36:24.073 [2024-12-16 16:42:12.530668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.073 [2024-12-16 16:42:12.530699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.073 qpair failed and we were unable to recover it. 00:36:24.073 [2024-12-16 16:42:12.530954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.073 [2024-12-16 16:42:12.530985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.073 qpair failed and we were unable to recover it. 
00:36:24.073 [2024-12-16 16:42:12.531123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.073 [2024-12-16 16:42:12.531155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.073 qpair failed and we were unable to recover it.
[... the identical three-line failure (connect() errno = 111 -> sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 16:42:12.531353 through 16:42:12.575670; duplicate records elided ...]
00:36:24.078 [2024-12-16 16:42:12.575846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.575876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.576041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.576297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.576451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.576604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.576735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.576882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.576994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.577025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.577146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.577179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.577365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.577396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 
00:36:24.078 [2024-12-16 16:42:12.577508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.577538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.577638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.577667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.577850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.577881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.577984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.578014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.578181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.578215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.578487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.578519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.578692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.078 [2024-12-16 16:42:12.578722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.078 qpair failed and we were unable to recover it. 00:36:24.078 [2024-12-16 16:42:12.578898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.578929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.579147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.579181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.579296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.579327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 
00:36:24.079 [2024-12-16 16:42:12.579505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.579535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.579668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.579700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.579812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.579842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.580013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.580043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.580234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.580267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.580454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.580485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.580590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.580620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.580720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.580751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.580943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.580974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.581090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.581131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 
00:36:24.079 [2024-12-16 16:42:12.581342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.581379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.581555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.581587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.581792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.581823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.581992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.582023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.582194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.582227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.582339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.582370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.582566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.582598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.582787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.582820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.582988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.583019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.583188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.583219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 
00:36:24.079 [2024-12-16 16:42:12.583408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.583439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.583577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.583608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.583787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.583819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.583949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.583981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.584145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.584178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.584293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.584326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.584563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.584594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.584772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.584804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.585037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.585069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.585321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.585354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 
00:36:24.079 [2024-12-16 16:42:12.585627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.585658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.585833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.585864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.586072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.586117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.586292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.586322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.586444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.586475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.586598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.586630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.079 qpair failed and we were unable to recover it. 00:36:24.079 [2024-12-16 16:42:12.586801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.079 [2024-12-16 16:42:12.586833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.587052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.587083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.587299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.587330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.587515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.587546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 
00:36:24.080 [2024-12-16 16:42:12.587737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.587769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.587900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.587932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.588034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.588064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.588269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.588300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.588401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.588433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.588551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.588581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.588707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.588739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.588842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.588874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.589137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.589169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.589302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.589331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 
00:36:24.080 [2024-12-16 16:42:12.589446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.589477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.589602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.589633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.589803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.589833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.589942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.589974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.590156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.590187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.590297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.590329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.590498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.590530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.590707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.590752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.590914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.590958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.591123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.591168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 
00:36:24.080 [2024-12-16 16:42:12.591364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.591406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.591558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.591602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.591798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.591843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.592053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.592109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.592253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.592296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.592447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.592491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.592639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.592681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.592818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.592861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.593056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.593118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.593393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.593445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 
00:36:24.080 [2024-12-16 16:42:12.593613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.593657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.593932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.593977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.594251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.594298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.594445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.594488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.594635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.594678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.080 qpair failed and we were unable to recover it. 00:36:24.080 [2024-12-16 16:42:12.594891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.080 [2024-12-16 16:42:12.594938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.595072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.595130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.595273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.595317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.595527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.595583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.595724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.595767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 
00:36:24.081 [2024-12-16 16:42:12.596042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.596088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.596372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.596419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.596552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.596597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.596800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.596844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.597115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.597163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.597433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.597473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.597672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.597718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.597936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.597980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.598161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.598209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.598365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.598417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 
00:36:24.081 [2024-12-16 16:42:12.598570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.598616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.598754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.598800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.599054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.599110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.599259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.599305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.599446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.599492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.599711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.599756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.599897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.599942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.600133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.600181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.600315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.600361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.600574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.600618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 
00:36:24.081 [2024-12-16 16:42:12.600832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.600878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.601072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.601115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.601220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.601250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.601484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.601508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.601603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.601624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.601776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.601798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.601966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.601987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.602091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.602134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.602239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.602261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 00:36:24.081 [2024-12-16 16:42:12.602418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.081 [2024-12-16 16:42:12.602439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.081 qpair failed and we were unable to recover it. 
00:36:24.082 [2024-12-16 16:42:12.602533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.602557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.602656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.602677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.602824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.602845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.603011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.603033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.603136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.603160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.603315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.603337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.603522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.603544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.603692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.603713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.603862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.603884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 00:36:24.082 [2024-12-16 16:42:12.604031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.082 [2024-12-16 16:42:12.604056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.082 qpair failed and we were unable to recover it. 
00:36:24.373 [2024-12-16 16:42:12.638210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.638239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.638371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.638400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.638497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.638525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.638636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.638671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.638767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.638795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.638959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.638988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.639083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.639120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.639301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.639329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.639449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.639477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.373 [2024-12-16 16:42:12.639637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.639665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 
00:36:24.373 [2024-12-16 16:42:12.639873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.373 [2024-12-16 16:42:12.639901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.373 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.640068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.640123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.640356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.640384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.640503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.640531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.640704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.640732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.640909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.640937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.641057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.641085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.641197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.641226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.641349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.641377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.641542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.641571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 
00:36:24.374 [2024-12-16 16:42:12.641672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.641700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.641818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.641846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.642081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.642122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.642300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.642328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.642434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.642462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.642590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.642618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.642789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.642817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.642946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.642975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.643136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.643167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.643289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.643317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 
00:36:24.374 [2024-12-16 16:42:12.643434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.643462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.643658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.643687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.643938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.643967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.644088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.644140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.644260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.644288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.644470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.644498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.644594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.644627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.644874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.644901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.645009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.645037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.645154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.645182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 
00:36:24.374 [2024-12-16 16:42:12.645461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.645490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.645655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.645683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.645791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.645820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.645954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.645981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.646113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.646143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.646326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.646355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.646611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.646638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.646803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.646831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.646994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.647023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 00:36:24.374 [2024-12-16 16:42:12.647191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.374 [2024-12-16 16:42:12.647222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.374 qpair failed and we were unable to recover it. 
00:36:24.374 [2024-12-16 16:42:12.647388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.647417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.647535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.647563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.647728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.647756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.647939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.647968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.648084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.648122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.648232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.648262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.648481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.648510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.648633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.648661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.648785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.648813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.648917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.648946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 
00:36:24.375 [2024-12-16 16:42:12.649158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.649187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.649311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.649338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.649450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.649480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.649648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.649676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.649799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.649826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.650031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.650060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.650170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.650200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.650437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.650465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.650671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.650700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.650803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.650831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 
00:36:24.375 [2024-12-16 16:42:12.651017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.651043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.651245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.651280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.651381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.651408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.651610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.651642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.651739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.651768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.651866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.651897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.652118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.652152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.652355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.652386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.652488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.652518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.652640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.652671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 
00:36:24.375 [2024-12-16 16:42:12.652847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.652878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.653058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.653088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.653295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.653326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.653453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.653483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.653665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.653697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.653873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.653904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.654022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.654052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.654175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.654207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.654341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.375 [2024-12-16 16:42:12.654371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.375 qpair failed and we were unable to recover it. 00:36:24.375 [2024-12-16 16:42:12.654547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.654576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 
00:36:24.376 [2024-12-16 16:42:12.654763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.654794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.655030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.655060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.655258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.655290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.655429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.655459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.655579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.655609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.655791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.655822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.656018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.656049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.656249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.656281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.656411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.656440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.656561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.656591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 
00:36:24.376 [2024-12-16 16:42:12.656767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.656798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.656982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.657013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.657187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.657219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.657345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.657374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.657550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.657582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.657687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.657715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.657830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.657861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.657973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.658002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.658127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.658158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.658373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.658404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 
00:36:24.376 [2024-12-16 16:42:12.658504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.658535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.658742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.658772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.658891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.658927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.659035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.659065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.659245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.659276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.659381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.659410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.659619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.659651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.659835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.659865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.659977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.660008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.660111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.660144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 
00:36:24.376 [2024-12-16 16:42:12.660327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.660359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.660479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.660510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.660689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.660719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.660836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.660868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.660985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.661015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.661151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.661184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.661310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.661341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.661470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.661499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.376 [2024-12-16 16:42:12.661670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.376 [2024-12-16 16:42:12.661700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.376 qpair failed and we were unable to recover it. 00:36:24.377 [2024-12-16 16:42:12.661802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.377 [2024-12-16 16:42:12.661832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.377 qpair failed and we were unable to recover it. 
00:36:24.377 [2024-12-16 16:42:12.662027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.377 [2024-12-16 16:42:12.662058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.377 qpair failed and we were unable to recover it.
00:36:24.382 [the same three-part error sequence repeated ~210 times between 16:42:12.662 and 16:42:12.699, differing only in microsecond timestamps: every connect() to 10.0.0.2:4420 failed with errno = 111 and each reconnect attempt on tqpair=0xa35cd0 failed without recovery; only the first occurrence is shown above]
00:36:24.382 [2024-12-16 16:42:12.699846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.699876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.700119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.700150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.700318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.700349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.700461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.700493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.700682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.700713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.700830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.700860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.701066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.701106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.701302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.382 [2024-12-16 16:42:12.701333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.382 qpair failed and we were unable to recover it. 00:36:24.382 [2024-12-16 16:42:12.701503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.701533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.701702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.701733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 
00:36:24.383 [2024-12-16 16:42:12.701969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.702000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.702116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.702147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.702335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.702367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.702546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.702576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.702749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.702780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.702880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.702910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.703134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.703166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.703357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.703390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.703582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.703613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.703801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.703832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 
00:36:24.383 [2024-12-16 16:42:12.704016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.704047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.704188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.704219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.704402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.704432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.704604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.704634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.704742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.704772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.704884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.704914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.705091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.705135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.705259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.705290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.705478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.705508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.705679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.705709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 
00:36:24.383 [2024-12-16 16:42:12.705881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.705916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.706117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.706150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.706329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.706358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.706551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.706580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.706757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.706787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.706907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.706943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.707063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.707092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.707249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.707279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.707391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.707422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.707525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.707555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 
00:36:24.383 [2024-12-16 16:42:12.707670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.707702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.707962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.707992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.708121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.708153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.708289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.708318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.708434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.708472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.708658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.708688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.708808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.708839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.708959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.383 [2024-12-16 16:42:12.708990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.383 qpair failed and we were unable to recover it. 00:36:24.383 [2024-12-16 16:42:12.709106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.709137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.709311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.709344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 
00:36:24.384 [2024-12-16 16:42:12.709550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.709580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.709698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.709727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.709917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.709948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.710117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.710149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.710334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.710366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.710560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.710592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.710727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.710756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.710871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.710902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.711012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.711044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.711183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.711214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 
00:36:24.384 [2024-12-16 16:42:12.711330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.711360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.711458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.711487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.711662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.711691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.711866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.711897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.712075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.712118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.712323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.712355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.712524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.712554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.712661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.712690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.712882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.712913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.713168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.713201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 
00:36:24.384 [2024-12-16 16:42:12.713319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.713350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.713459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.713489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.713610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.713642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.713761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.713790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.713981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.714012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.714265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.714298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.714486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.714516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.714641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.714672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.714861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.714891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.715079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.715119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 
00:36:24.384 [2024-12-16 16:42:12.715261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.715292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.715418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.715448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.715580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.715609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.715798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.715827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.716003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.716033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.716168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.716201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.716325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.716354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.384 [2024-12-16 16:42:12.716469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.384 [2024-12-16 16:42:12.716500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.384 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.716610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.716639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.716757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.716788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 
00:36:24.385 [2024-12-16 16:42:12.716970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.717001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.717180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.717212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.717383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.717413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.717613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.717644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.717810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.717840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.718015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.718046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.718317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.718349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.718462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.718492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.718607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.718637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.718778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.718809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 
00:36:24.385 [2024-12-16 16:42:12.718926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.718956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.719072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.719126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.719230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.719259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.719430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.719460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.719637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.719668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.719853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.719883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.720049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.720080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.720224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.720255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.720435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.720465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.720667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.720697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 
00:36:24.385 [2024-12-16 16:42:12.720869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.720900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.721013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.721043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.721168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.721205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.721405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.721437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.721612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.721642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.721828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.721858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.721980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.722010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.722251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.722284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.722535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.722565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.722698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.722728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 
00:36:24.385 [2024-12-16 16:42:12.722909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.722939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.723126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.723158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.723329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.723359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.723469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.723499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.723748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.723779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.723892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.723921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.724051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.724080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.385 qpair failed and we were unable to recover it. 00:36:24.385 [2024-12-16 16:42:12.724337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.385 [2024-12-16 16:42:12.724368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.386 qpair failed and we were unable to recover it. 00:36:24.386 [2024-12-16 16:42:12.724557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.386 [2024-12-16 16:42:12.724588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.386 qpair failed and we were unable to recover it. 00:36:24.386 [2024-12-16 16:42:12.724687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.386 [2024-12-16 16:42:12.724719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.386 qpair failed and we were unable to recover it. 
00:36:24.386 [2024-12-16 16:42:12.724904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.386 [2024-12-16 16:42:12.724933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.386 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0xa35cd0 repeat 63 more times, 2024-12-16 16:42:12.725150 through 16:42:12.736274 ...]
00:36:24.387 [2024-12-16 16:42:12.736442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.387 [2024-12-16 16:42:12.736513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.387 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7fb4b0000b90 repeat 39 more times, 2024-12-16 16:42:12.736660 through 16:42:12.743556 ...]
00:36:24.388 [2024-12-16 16:42:12.743746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.388 [2024-12-16 16:42:12.743782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.388 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0xa35cd0 repeat 105 more times, 2024-12-16 16:42:12.743968 through 16:42:12.763007 ...]
00:36:24.391 [2024-12-16 16:42:12.763180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.763212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.763392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.763423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.763553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.763584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.763771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.763805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.763984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.764015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.764129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.764163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.764269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.764300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.764480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.764511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.764628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.764659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.764769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.764799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 
00:36:24.391 [2024-12-16 16:42:12.764972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.765002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.765166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.765198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.765373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.765403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.765643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.391 [2024-12-16 16:42:12.765674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.391 qpair failed and we were unable to recover it. 00:36:24.391 [2024-12-16 16:42:12.765878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.765908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.766105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.766139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.766321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.766352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.766468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.766499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.766679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.766710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.766899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.766930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 
00:36:24.392 [2024-12-16 16:42:12.767146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.767179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.767290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.767321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.767494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.767525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.767726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.767757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.767874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.767904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.768022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.768053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.768184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.768217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.768331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.768361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.768485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.768516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.768692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.768722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 
00:36:24.392 [2024-12-16 16:42:12.768985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.769148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.769308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.769517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.769660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.769806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.769954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.769986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.770126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.770158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.770269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.770299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.770474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.770505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 
00:36:24.392 [2024-12-16 16:42:12.770611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.770642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.770759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.770789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.770890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.770921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.771059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.771218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.771369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.771506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.771643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.771809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.771998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.772028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 
00:36:24.392 [2024-12-16 16:42:12.772151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.772183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.772356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.772388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.772493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.772525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.772769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.392 [2024-12-16 16:42:12.772802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.392 qpair failed and we were unable to recover it. 00:36:24.392 [2024-12-16 16:42:12.772974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.773114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.773257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.773410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.773564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.773698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 
00:36:24.393 [2024-12-16 16:42:12.773843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.773873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.774004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.774035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.774243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.774277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.774481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.774512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.774634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.774664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.774767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.774797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.774974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.775004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.775133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.775165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.775271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.775302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.775449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.775479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 
00:36:24.393 [2024-12-16 16:42:12.775603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.775634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.775843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.775875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.776059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.776089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.776287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.776319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.776447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.776478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.776587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.776618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.776748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.776778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.776908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.776938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.777045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.777076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.777228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.777260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 
00:36:24.393 [2024-12-16 16:42:12.777384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.777417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.777528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.777560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.777737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.777770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.777871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.777903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.778015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.778045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.778260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.778295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.778408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.778439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.778560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.778590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.778778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.778808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.778928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.778958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 
00:36:24.393 [2024-12-16 16:42:12.779171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.779204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.779312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.779342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.779511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.779542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.393 qpair failed and we were unable to recover it. 00:36:24.393 [2024-12-16 16:42:12.779645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.393 [2024-12-16 16:42:12.779676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.779873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.780082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.780125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.780234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.780266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.780378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.780410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.780537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.780574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.780699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.780730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 
00:36:24.394 [2024-12-16 16:42:12.780968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.780999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.781119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.781151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.781352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.781383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.781552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.781583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.781685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.781715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.781900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.781931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.782047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.782078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.782338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.782370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.782481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.782511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.782630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.782662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 
00:36:24.394 [2024-12-16 16:42:12.782768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.782798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.782922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.782953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.783071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.783115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.783285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.783316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.783446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.783476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.783592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.783622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.783736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.783767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.783951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.783981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.784185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.784217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.784326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.784356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 
00:36:24.394 [2024-12-16 16:42:12.784473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.784505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.784620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.784651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.784769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.784800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.784973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.785256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.785391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.785537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.785673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.785812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 00:36:24.394 [2024-12-16 16:42:12.785944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.394 [2024-12-16 16:42:12.785972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.394 qpair failed and we were unable to recover it. 
00:36:24.394 [2024-12-16 16:42:12.786081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.394 [2024-12-16 16:42:12.786122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.394 qpair failed and we were unable to recover it.
[... the identical three-line failure (connect() failed with errno = 111, sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, through 2024-12-16 16:42:12.799990; the same failure then continues on a new qpair ...]
00:36:24.396 [2024-12-16 16:42:12.800160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.396 [2024-12-16 16:42:12.800231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.396 qpair failed and we were unable to recover it.
[... the identical three-line failure repeats for tqpair=0x7fb4bc000b90 (addr=10.0.0.2, port=4420, errno = 111), again differing only in timestamps, through the final occurrence below ...]
00:36:24.400 [2024-12-16 16:42:12.826201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.400 [2024-12-16 16:42:12.826232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.400 qpair failed and we were unable to recover it.
00:36:24.400 [2024-12-16 16:42:12.826338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.826374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.826564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.826595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.826722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.826752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.826927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.826958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.827132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.827165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.827267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.827297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.827478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.827508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.827608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.827639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.827830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.827861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.827977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 
00:36:24.400 [2024-12-16 16:42:12.828120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.828260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.828414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.828548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.828755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.828958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.828988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.829092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.829150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.829258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.400 [2024-12-16 16:42:12.829289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.400 qpair failed and we were unable to recover it. 00:36:24.400 [2024-12-16 16:42:12.829404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.829435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.829566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.829597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 
00:36:24.401 [2024-12-16 16:42:12.829769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.829800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.829974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.830005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.830121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.830153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.830347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.830377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.830516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.830546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.830663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.830695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.830812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.830843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.831020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.831052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.831178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.831208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.831309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.831340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 
00:36:24.401 [2024-12-16 16:42:12.831540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.831572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.831747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.831778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.831968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.831999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.832120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.832152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.832297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.832328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.832447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.832478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.832584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.832617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.832742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.832774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.832874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.832904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.833084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.833124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 
00:36:24.401 [2024-12-16 16:42:12.833310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.833357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.833467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.833499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.833609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.833639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.833740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.833770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.833872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.833901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.834091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.834154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.834251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.834283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.834408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.834438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.834622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.834653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.834770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.834801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 
00:36:24.401 [2024-12-16 16:42:12.834927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.834957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.835060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.835091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.835302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.835334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.835441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.835472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.835599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.835630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.835755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.835787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.835907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.401 [2024-12-16 16:42:12.835938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.401 qpair failed and we were unable to recover it. 00:36:24.401 [2024-12-16 16:42:12.836061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.836091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.836222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.836253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.836458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.836489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 
00:36:24.402 [2024-12-16 16:42:12.836608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.836640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.836774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.836805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.836920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.836950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.837126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.837158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.837333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.837364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.837552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.837582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.837697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.837728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.837907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.837944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.838124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.838155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.838324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.838355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 
00:36:24.402 [2024-12-16 16:42:12.838554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.838584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.838691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.838721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.838893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.838924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.839092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.839132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.839242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.839274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.839444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.839475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.839583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.839613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.839784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.839815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.839991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.840022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.840206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.840239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 
00:36:24.402 [2024-12-16 16:42:12.840370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.840401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.840542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.840573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.840696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.840727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.840835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.840865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.841039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.841197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.841341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.841501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.841701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.841833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 
00:36:24.402 [2024-12-16 16:42:12.841966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.841997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.842109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.842142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.842323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.842355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.842467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.842498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.842618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.842650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.842756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.402 [2024-12-16 16:42:12.842787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.402 qpair failed and we were unable to recover it. 00:36:24.402 [2024-12-16 16:42:12.842891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.842921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.843056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.843088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.843303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.843336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.843546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.843577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 
00:36:24.403 [2024-12-16 16:42:12.843749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.843780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.844087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.844128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.844237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.844268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.844450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.844481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.844658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.844690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.844803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.844833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.844943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.844974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.845157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.845194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.845388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.845418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.845605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.845636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 
00:36:24.403 [2024-12-16 16:42:12.845808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.845839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.845963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.845994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.846136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.846169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.846281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.846312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.846415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.846445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.846632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.846664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.846768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.846799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.847016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.847047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.847172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.847204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.847375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.847406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 
00:36:24.403 [2024-12-16 16:42:12.847596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.847627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.847736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.847767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.847883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.847915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.848089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.848131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.848320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.848351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.848466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.848498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.848606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.848637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.848748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.848779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.848953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.848984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 00:36:24.403 [2024-12-16 16:42:12.849090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.403 [2024-12-16 16:42:12.849134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.403 qpair failed and we were unable to recover it. 
00:36:24.403 [2024-12-16 16:42:12.849303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.403 [2024-12-16 16:42:12.849334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.403 qpair failed and we were unable to recover it.
[... the three-line sequence above repeats 143 more times between 16:42:12.849433 and 16:42:12.876064, every connect() to 10.0.0.2:4420 failing with errno = 111 on tqpair=0x7fb4bc000b90 ...]
[... from 16:42:12.876227 the identical sequence repeats 66 times, through 16:42:12.887644, with the initiator now retrying on a freshly allocated qpair (tqpair=0x7fb4b4000b90) and every connect() still failing with errno = 111 ...]
00:36:24.409 [2024-12-16 16:42:12.887839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.887869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.887994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.888168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.888324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.888465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.888622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.888764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.888961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.888997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.889120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.889153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.889344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.889375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 
00:36:24.409 [2024-12-16 16:42:12.889492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.889523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.889647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.889678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.889813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.889845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.889955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.409 [2024-12-16 16:42:12.889987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.409 qpair failed and we were unable to recover it. 00:36:24.409 [2024-12-16 16:42:12.890167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.890200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.890389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.890420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.890526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.890557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.890671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.890701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.890875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.890906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.891078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 
00:36:24.410 [2024-12-16 16:42:12.891236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.891267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.891376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.891408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.891527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.891559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.891773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.891804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.891947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.891978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.892140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.892172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.892290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.892322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.892518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.892549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.892651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.892684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.892876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.892907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 
00:36:24.410 [2024-12-16 16:42:12.893021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.893053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.893255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.893289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.893397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.893429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.893670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.893701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.893831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.893863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.893982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.894012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.894220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.894254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.894447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.894477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.894587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.894619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.894741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.894771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 
00:36:24.410 [2024-12-16 16:42:12.894939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.894970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.895084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.895126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.895297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.895328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.895447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.895479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.895648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.895679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.895861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.895892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.896081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.896120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.896230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.896267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.896371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.896402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.896606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.896637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 
00:36:24.410 [2024-12-16 16:42:12.896779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.896809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.896919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.896951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.410 qpair failed and we were unable to recover it. 00:36:24.410 [2024-12-16 16:42:12.897056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.410 [2024-12-16 16:42:12.897087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.897231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.897264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.897454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.897485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.897600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.897631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.897821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.897851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.897959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.897990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.898164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.898197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.898299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.898330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 
00:36:24.411 [2024-12-16 16:42:12.898461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.898493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.898677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.898709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.898837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.898869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.899039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.899070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.899200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.899232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.899514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.899545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.899804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.899836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.899973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.900004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.900185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.900219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.900397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.900427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 
00:36:24.411 [2024-12-16 16:42:12.900616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.900647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.900923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.900955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.901230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.901262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.901459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.901490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.901602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.901634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.901748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.901779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.902024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.902055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.902250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.902281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.902402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.902433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.902547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.902578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 
00:36:24.411 [2024-12-16 16:42:12.902696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.902726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.902846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.902877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.902999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.903030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.903216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.903250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.903431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.903462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.903724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.903755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.903935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.903966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.904074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.904117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.904292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.904323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.904430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.904461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 
00:36:24.411 [2024-12-16 16:42:12.904668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.411 [2024-12-16 16:42:12.904699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.411 qpair failed and we were unable to recover it. 00:36:24.411 [2024-12-16 16:42:12.904816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.904847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.905026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.905057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.905205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.905237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.905408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.905439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.905641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.905672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.905789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.905820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.905989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.906020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.906147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.906179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.906284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.906314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 
00:36:24.412 [2024-12-16 16:42:12.906489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.906520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.906643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.906674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.906842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.906874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.907039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.907071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.907343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.907375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.907485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.907515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.907639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.907670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.907853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.907883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.908080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.908121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.908239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.908269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 
00:36:24.412 [2024-12-16 16:42:12.908458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.908489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.908680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.908711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.908909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.908940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.909065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.909104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.909224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.909255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.909434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.909465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.909577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.909608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.909796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.909827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.909942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.909972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.910153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.910185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 
00:36:24.412 [2024-12-16 16:42:12.910305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.910336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.910509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.910540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.910641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.910672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.910846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.910878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.911064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.911103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.911208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.911239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.911417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.911448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.911615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.911658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.911869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.911900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 00:36:24.412 [2024-12-16 16:42:12.912020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.412 [2024-12-16 16:42:12.912050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.412 qpair failed and we were unable to recover it. 
00:36:24.412 [2024-12-16 16:42:12.912194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:24.412 [2024-12-16 16:42:12.912226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 
00:36:24.412 qpair failed and we were unable to recover it. 
[... the same three-line error (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 16:42:12.912226 through 16:42:12.949766, differing only in timestamps; duplicate occurrences elided ...]
00:36:24.418 [2024-12-16 16:42:12.949766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:24.418 [2024-12-16 16:42:12.949797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 
00:36:24.418 qpair failed and we were unable to recover it. 
00:36:24.418 [2024-12-16 16:42:12.949968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.950136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.950169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.950339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.950370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.950484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.950514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.950622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.950652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.950821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.950852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.950963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.950995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.951105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.951137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.951260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.951291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.951473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.951503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 
00:36:24.418 [2024-12-16 16:42:12.951625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.951657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.951872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.951902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.952011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.952042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.952149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.952181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.952376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.952413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.952581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.952612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.952727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.952758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.952973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.953003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.953182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.953213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 00:36:24.418 [2024-12-16 16:42:12.953397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.418 [2024-12-16 16:42:12.953429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.418 qpair failed and we were unable to recover it. 
00:36:24.702 [2024-12-16 16:42:12.953554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.953585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.953770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.953801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.953989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.954020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.954193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.954226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.954421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.954451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.954552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.954583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.954766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.954797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.954912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.954943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.955079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.955119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.955360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.955392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 
00:36:24.702 [2024-12-16 16:42:12.955492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.955522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.955770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.955801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.956002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.956034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.956206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.956237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.956342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.956373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.956505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.956536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.956718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.956748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.957011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.957041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.957236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.957267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.957382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.957412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 
00:36:24.702 [2024-12-16 16:42:12.957593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.957623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.957731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.957762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.958038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.958069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.958260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.958291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.958399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.958429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.958542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.958573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.958793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.958909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.958940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.959115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.959148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.959327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.959359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 
00:36:24.702 [2024-12-16 16:42:12.959467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.702 [2024-12-16 16:42:12.959498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.702 qpair failed and we were unable to recover it. 00:36:24.702 [2024-12-16 16:42:12.959604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.959634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.959747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.959778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.959889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.959920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.960036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.960072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.960194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.960226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.960464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.960495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.960679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.960710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.960842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.960873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.960981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 
00:36:24.703 [2024-12-16 16:42:12.961122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.961257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.961402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.961557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.961693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.961905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.961936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.962054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.962085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.962237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.962270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.962400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.962431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.962625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.962656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 
00:36:24.703 [2024-12-16 16:42:12.962770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.962801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.962916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.962947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.963082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.963123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.963238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.963269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.963475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.963505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.963691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.963722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.963840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.963871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.963980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.964012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.964194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.964232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.964338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.964368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 
00:36:24.703 [2024-12-16 16:42:12.964564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.964595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.964773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.964804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.965049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.965080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.965204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.965236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.965352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.965382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.965497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.965528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.965660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.965690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.965861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.965892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.966006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.966036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 00:36:24.703 [2024-12-16 16:42:12.966137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.703 [2024-12-16 16:42:12.966170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.703 qpair failed and we were unable to recover it. 
00:36:24.703 [2024-12-16 16:42:12.966379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.966411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.966585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.966616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.966872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.966992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.967154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.967309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.967444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.967584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.967729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.967869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.967900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 
00:36:24.704 [2024-12-16 16:42:12.968005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.968036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.968155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.968187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.968375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.968406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.968579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.968611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.968818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.968848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.968970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.969144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.969295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.969455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.969601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 
00:36:24.704 [2024-12-16 16:42:12.969729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.969868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.969898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.970019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.970050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.970237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.970269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.970395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.970426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.970616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.970647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.970763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.970794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.970964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.970994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.971104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.971147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.971283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.971315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 
00:36:24.704 [2024-12-16 16:42:12.971422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.971453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.971630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.971662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.971840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.971870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.971975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.972006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.972139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.972173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.972282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.972313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.972415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.972445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.972572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.972603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.972732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.704 [2024-12-16 16:42:12.972762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.704 qpair failed and we were unable to recover it. 00:36:24.704 [2024-12-16 16:42:12.972998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.705 [2024-12-16 16:42:12.973029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.705 qpair failed and we were unable to recover it. 
00:36:24.705 [2024-12-16 16:42:12.973206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.705 [2024-12-16 16:42:12.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.705 qpair failed and we were unable to recover it.
00:36:24.709 [... the same three-line failure (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 16:42:12.973 to 16:42:13.012, first for tqpair=0x7fb4b4000b90 and then, from 16:42:12.980 onward, for tqpair=0xa35cd0 ...]
00:36:24.710 [2024-12-16 16:42:13.012702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.012732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.012913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.012946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.013142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.013175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.013370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.013402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.013511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.013540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.013650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.013681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.013812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.013843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.014031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.014063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.014247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa43c70 is same with the state(6) to be set 00:36:24.710 [2024-12-16 16:42:13.014647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.014719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 
00:36:24.710 [2024-12-16 16:42:13.014924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.014960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.015145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.015180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.015297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.015331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.015506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.015539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.710 [2024-12-16 16:42:13.015645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.710 [2024-12-16 16:42:13.015678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.710 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.015846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.015882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.016149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.016183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.016382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.016414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.016613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.016645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.016825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.016856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 
00:36:24.711 [2024-12-16 16:42:13.016960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.016992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.017232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.017265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.017488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.017604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.017636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.017860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.017891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.018003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.018041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.018256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.018288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.018395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.018428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.018602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.018633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.018734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.018765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 
00:36:24.711 [2024-12-16 16:42:13.019005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.019036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.019217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.019250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.019435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.019467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.019606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.019637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.019809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.019841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.020050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.020081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.020262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.020295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.020413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.020445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.020625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.020657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.020874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.020906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 
00:36:24.711 [2024-12-16 16:42:13.021146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.021179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.021344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.021375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.021660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.021692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.021881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.021912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.022083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.022126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.022249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.022282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.022460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.022491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.711 [2024-12-16 16:42:13.022640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.711 qpair failed and we were unable to recover it. 00:36:24.711 [2024-12-16 16:42:13.022904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.022935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.023055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.023087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 
00:36:24.712 [2024-12-16 16:42:13.023342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.023374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.023547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.023578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.023757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.023789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.023978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.024009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.024130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.024162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.024359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.024393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.024567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.024599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.024850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.024881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.025063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.025108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.025297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.025331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 
00:36:24.712 [2024-12-16 16:42:13.025504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.025536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.025650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.025682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.025954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.025987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.026165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.026199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.026315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.026348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.026451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.026490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.026595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.026626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.026838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.026871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.026997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.027030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.027171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.027204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 
00:36:24.712 [2024-12-16 16:42:13.027442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.027475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.027671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.027704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.027833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.027865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.027971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.028003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.028187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.028221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.028342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.028374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.028520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.028563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.028784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.028835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.029078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.029159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.029307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.029342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 
00:36:24.712 [2024-12-16 16:42:13.029475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.029508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.029635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.029665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.029851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.029882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.030145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.030178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.030302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.030333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.030513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.030543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.030735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.712 [2024-12-16 16:42:13.030765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.712 qpair failed and we were unable to recover it. 00:36:24.712 [2024-12-16 16:42:13.030948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.030979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.031233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.031265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.031398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.031429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 
00:36:24.713 [2024-12-16 16:42:13.031599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.031631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.031796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.031828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.032033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.032114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.032319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.032357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.032478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.032510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.032692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.032724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.032916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.032949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.033117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.033149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.033365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.033397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.033647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.033679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 
00:36:24.713 [2024-12-16 16:42:13.033861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.033892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.034059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.034091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.034252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.034285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.034551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.034583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.034817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.034849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.034976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.035009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.035146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.035181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.035371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.035403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.035594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.035626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.035808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.035839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 
00:36:24.713 [2024-12-16 16:42:13.036113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.036146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.036336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.036368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.036500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.036533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.036776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.036808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.036994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.037026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.037291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.037324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.037495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.037527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.037815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.037846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.037970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.038002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.038131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.038165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 
00:36:24.713 [2024-12-16 16:42:13.038424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.038456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.038637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.038669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.038857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.038889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.038998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.039029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.039211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.039245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.039437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.713 [2024-12-16 16:42:13.039470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.713 qpair failed and we were unable to recover it. 00:36:24.713 [2024-12-16 16:42:13.039637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.039670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.039923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.039956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.040088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.040129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.040345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.040377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 
00:36:24.714 [2024-12-16 16:42:13.040560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.040592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.040697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.040730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.040912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.040949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.041133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.041165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.041275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.041307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.041569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.041601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.041770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.041802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.042003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.042036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.042250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.042283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 00:36:24.714 [2024-12-16 16:42:13.042510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.714 [2024-12-16 16:42:13.042543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.714 qpair failed and we were unable to recover it. 
00:36:24.719 [2024-12-16 16:42:13.084287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.084322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.084519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.084552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.084734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.084773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.084943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.084976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.085156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.085190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.085305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.085338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.085538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.085570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.085704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.085737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.085945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.085977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.086105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.086138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 
00:36:24.719 [2024-12-16 16:42:13.086377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.086410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.086687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.086719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.086861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.086893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.087166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.087200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.087332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.087364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.087573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.087605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.087741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.087773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.087893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.087926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.088040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.088072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.088271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.088304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 
00:36:24.719 [2024-12-16 16:42:13.088422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.088456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.088626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.088659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.088832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.088864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.088995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.089029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.089210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.089244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.089356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.719 [2024-12-16 16:42:13.089389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.719 qpair failed and we were unable to recover it. 00:36:24.719 [2024-12-16 16:42:13.089501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.089534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.089715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.089747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.089927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.089959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.090220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.090275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 
00:36:24.720 [2024-12-16 16:42:13.090472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.090504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.090620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.090652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.090887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.090920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.091092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.091137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.091252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.091284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.091415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.091448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.091623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.091656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.091769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.091801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.091913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.091945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.092122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.092155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 
00:36:24.720 [2024-12-16 16:42:13.092358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.092390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.092519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.092551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.092743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.092782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.092904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.092937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.093115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.093150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.093342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.093374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.093631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.093662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.093924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.093956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.094212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.094245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.094348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.094380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 
00:36:24.720 [2024-12-16 16:42:13.094547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.094578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.094754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.094785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.095022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.095054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.095249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.095281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.095403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.095434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.095550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.095581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.095766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.095798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.096036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.096067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.096259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.096291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 00:36:24.720 [2024-12-16 16:42:13.096398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.096430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.720 qpair failed and we were unable to recover it. 
00:36:24.720 [2024-12-16 16:42:13.096533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.720 [2024-12-16 16:42:13.096564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.096684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.096715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.096842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.096874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.097042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.097073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.097254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.097286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.097395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.097427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.097646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.097819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.097850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.098040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.098072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.098334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.098405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 
00:36:24.721 [2024-12-16 16:42:13.098560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.098594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.098852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.098885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.099005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.099036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.099284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.099317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.099589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.099620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.099806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.099838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.099956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.099988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.100173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.100206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.100381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.100413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.100536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.100568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 
00:36:24.721 [2024-12-16 16:42:13.100679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.100711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.100880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.100912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.101034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.101073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.101349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.101382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.101566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.101598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.101790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.101822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.101995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.102026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.102208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.102241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.102417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.102449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.102707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.102739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 
00:36:24.721 [2024-12-16 16:42:13.102878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.102910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.103035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.103066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.103179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.103211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.103484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.103515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.103626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.103658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.103842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.103874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.104050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.104081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.104199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.104230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.104395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.104427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 00:36:24.721 [2024-12-16 16:42:13.104677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.721 [2024-12-16 16:42:13.104708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.721 qpair failed and we were unable to recover it. 
00:36:24.722 [2024-12-16 16:42:13.104917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.104949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.105051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.105082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.105254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.105285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.105478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.105508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.105626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.105657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.105773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.105803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.105974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.106005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.106248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.106281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.106411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.106441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.106678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.106747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 
00:36:24.722 [2024-12-16 16:42:13.106882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.106917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.107024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.107056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.107312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.107346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.107482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.107514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.107687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.107718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.107908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.107939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.108067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.108110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.108226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.108257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.108455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.108486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.108591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.108623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 
00:36:24.722 [2024-12-16 16:42:13.108804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.108835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.109005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.109036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.109249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.109291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.109491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.109522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.109693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.109724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.109834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.109866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.110070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.110112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.110351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.110383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.110611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.110642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.110750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.110781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 
00:36:24.722 [2024-12-16 16:42:13.110966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.110997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.111180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.111213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.111449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.111480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.111655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.111687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.111863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.111894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.112115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.112148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.112364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.112396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.112570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.112601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.722 [2024-12-16 16:42:13.112717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.722 [2024-12-16 16:42:13.112748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.722 qpair failed and we were unable to recover it. 00:36:24.723 [2024-12-16 16:42:13.112944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.723 [2024-12-16 16:42:13.112976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:24.723 qpair failed and we were unable to recover it. 
00:36:24.723 [2024-12-16 16:42:13.113107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.723 [2024-12-16 16:42:13.113138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.723 qpair failed and we were unable to recover it.
00:36:24.723 [... the same connect() failed / sock connection error / qpair failed sequence repeats 88 more times for tqpair=0x7fb4bc000b90, timestamps 2024-12-16 16:42:13.113258 through 16:42:13.130505 ...]
00:36:24.725 [2024-12-16 16:42:13.130790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.725 [2024-12-16 16:42:13.130861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.725 qpair failed and we were unable to recover it.
00:36:24.728 [... the same sequence repeats 120 more times for tqpair=0x7fb4b4000b90, timestamps 2024-12-16 16:42:13.131131 through 16:42:13.155838 ...]
00:36:24.728 [2024-12-16 16:42:13.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.156080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.156328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.156361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.156535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.156566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.156829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.156860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.157027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.157059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.157169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.157201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.157324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.157355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.157540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.157570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.157809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.157840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.158032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.158063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 
00:36:24.728 [2024-12-16 16:42:13.158294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.158325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.158447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.158479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.158676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.158707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.158912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.158943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.159125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.159158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.728 [2024-12-16 16:42:13.159281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.728 [2024-12-16 16:42:13.159312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.728 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.159496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.159527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.159718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.159748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.159937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.159969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.160231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.160264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 
00:36:24.729 [2024-12-16 16:42:13.160437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.160467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.160585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.160616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.160800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.160831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.161101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.161134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.161371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.161402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.161581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.161613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.161791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.161822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.162007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.162038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.162339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.162372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.162488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.162519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 
00:36:24.729 [2024-12-16 16:42:13.162633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.162664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.162837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.162869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.163114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.163147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.163334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.163365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.163555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.163586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.163717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.163748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.164013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.164045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.164222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.164253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.164445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.164477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.164659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.164697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 
00:36:24.729 [2024-12-16 16:42:13.164815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.164846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.165040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.165072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.165363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.165395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.165566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.165598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.165716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.165748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.165856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.165887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.166140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.166174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.166363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.166394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.166583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.166614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.166846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.166878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 
00:36:24.729 [2024-12-16 16:42:13.167078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.167117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.167306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.167337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.729 [2024-12-16 16:42:13.167516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.729 [2024-12-16 16:42:13.167547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.729 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.167721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.167752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.167965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.167996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.168268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.168300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.168412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.168444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.168613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.168644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.168763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.168793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.168979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.169010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 
00:36:24.730 [2024-12-16 16:42:13.169192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.169224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.169342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.169372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.169542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.169573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.169739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.169769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.169949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.169980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.170186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.170218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.170336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.170367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.170544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.170575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.170762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.170793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.170976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.171007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 
00:36:24.730 [2024-12-16 16:42:13.171139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.171171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.171288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.171324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.171565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.171597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.171701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.171733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.171874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.171905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.172113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.172145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.172320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.172352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.172590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.172622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.172757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.172788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.172963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.173000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 
00:36:24.730 [2024-12-16 16:42:13.173183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.173216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.173406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.173438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.173554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.173585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.173791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.173822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.174013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.174045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.174252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.174284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.174401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.174433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.174625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.174656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.174842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.174873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.175131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.175165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 
00:36:24.730 [2024-12-16 16:42:13.175271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.175302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.730 [2024-12-16 16:42:13.175413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.730 [2024-12-16 16:42:13.175446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.730 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.175646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.175676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.175866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.175898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.176034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.176064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.176248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.176280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.176395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.176426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.176687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.176717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.176825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.176856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.177029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.177060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 
00:36:24.731 [2024-12-16 16:42:13.177247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.177280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.177409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.177440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.177634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.177666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.177797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.177828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.177955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.177986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.178177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.178216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.178346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.178377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.178615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.178647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.178835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.178866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.179059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.179091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 
00:36:24.731 [2024-12-16 16:42:13.179273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.179305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.179490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.179521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.179701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.179732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.179899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.179931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.180131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.180164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.180405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.180436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.180537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.180569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.180751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.180782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.180981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.181112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.181151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 
00:36:24.731 [2024-12-16 16:42:13.181352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.181383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.181555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.181585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.181851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.181882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.182119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.182152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.182360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.182391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.182592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.182828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.182859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.183108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.183140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.183254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.183285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.183414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.183444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 
00:36:24.731 [2024-12-16 16:42:13.183680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.731 [2024-12-16 16:42:13.183712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.731 qpair failed and we were unable to recover it. 00:36:24.731 [2024-12-16 16:42:13.183968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.183998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.184135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.184167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.184411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.184443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.184700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.184731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.184967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.184998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.185167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.185200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.185400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.185430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.185597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.185628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.185895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.185926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 
00:36:24.732 [2024-12-16 16:42:13.186163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.186195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.186330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.186361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.186495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.186525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.186714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.186746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.186951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.186982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.187171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.187203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.187465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.187497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.187670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.187701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.187953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.187985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 00:36:24.732 [2024-12-16 16:42:13.188167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.732 [2024-12-16 16:42:13.188199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.732 qpair failed and we were unable to recover it. 
00:36:24.732 [2024-12-16 16:42:13.188330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.188360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.188531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.188563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.188754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.188784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.189020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.189052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.189333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.189366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.189601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.189632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.189839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.189890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.190084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.190145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.190335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.190367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.190569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.190607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.190726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.190756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.190932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.190964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.191146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.191179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.191363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.191394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.191507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.191539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.191801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.191833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.192077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.192115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.192353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.192384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.192640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.192672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.192790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.732 [2024-12-16 16:42:13.192821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.732 qpair failed and we were unable to recover it.
00:36:24.732 [2024-12-16 16:42:13.193020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.193052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.193186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.193217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.193383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.193415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.193532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.193563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.193742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.193774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.193987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.194017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.194139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.194175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.194377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.194409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.194590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.194622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.194743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.194775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.194966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.194997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.195119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.195151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.195388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.195419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.195608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.195640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.195902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.195933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.196146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.196179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.196461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.196492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.196608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.196640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.196918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.196949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.197129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.197161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.197365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.197397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.197585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.197617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.197832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.197863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.197989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.198020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.198203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.198235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.198494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.198524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.198628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.198659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.198833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.198864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.199050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.199082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.199262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.199299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.199559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.199591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.199693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.199724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.199985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.200017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.200293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.733 [2024-12-16 16:42:13.200326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.733 qpair failed and we were unable to recover it.
00:36:24.733 [2024-12-16 16:42:13.200586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.200618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.200799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.200831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.201012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.201042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.201244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.201277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.201471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.201502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.201684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.201715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.201913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.201944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.202048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.202080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.202283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.202314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.202493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.202525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.202661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.202691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.202870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.202901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.203167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.203199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.203328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.203359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.203533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.203564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.203755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.203787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.203974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.204005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.204246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.204279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.204543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.204574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.204706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.204738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.204908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.204938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.205115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.205148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.205408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.205477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.205777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.205846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.206057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.206110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.206258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.206292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.206551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.206582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.206820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.206852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.206983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.207014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.207196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.207229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.207499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.207531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.207739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.207769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.207946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.207978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.208161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.208194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.208391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.208422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.208588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.208628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.208808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.208839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.209020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.209051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.209317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.209350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.734 [2024-12-16 16:42:13.209516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.734 [2024-12-16 16:42:13.209546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.734 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.209806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.209836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.210107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.210140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.210321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.210351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.210484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.210516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.210719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.210749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.210954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.210984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.211171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.211204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.211442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.211473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.211639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.211670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.211846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.211878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.211991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.212022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.212145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.212177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.212354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.212384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.212646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.212677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.212800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.212831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.213042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.213073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.213220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.213253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.213362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.213392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.213650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.213682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.213807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.213838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.214039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.214071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.214318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.214351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.214515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.214583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.214841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.214876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.215075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.215119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.215388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.215420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.215617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.215648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.215888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.215919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.216041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.216073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.216340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.216373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.216540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.216571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.216685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.216716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.216954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.216986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.217155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.217189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.217297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.217328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.217577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.217618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.217748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.217778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.217980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.218011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.735 qpair failed and we were unable to recover it.
00:36:24.735 [2024-12-16 16:42:13.218218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.735 [2024-12-16 16:42:13.218250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.218425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.218456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.218575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.218607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.218810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.218841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.219045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.219077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.219207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.219240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.219461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.219491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.219613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.219644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.219812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.219844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.220027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.220058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.220256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.220287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.220473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.220506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.220713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.220744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.220912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.220943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.221117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.221151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.221333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.221364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.221565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.221597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.221845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.221876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.222064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.222107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.222291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.222322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.222440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.222472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.222599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.222631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.222838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.222869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.223036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.223066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.223198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.223232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.223339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.223370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.223507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.223538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.223717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.223749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.223873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.223903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.224019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.224051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.224306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.224339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.224524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.224556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.224743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.224774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.224986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.225018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.225199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.225232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.225494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.225525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.225726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.225758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.225942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.736 [2024-12-16 16:42:13.225979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:24.736 qpair failed and we were unable to recover it.
00:36:24.736 [2024-12-16 16:42:13.226244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.736 [2024-12-16 16:42:13.226278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.736 qpair failed and we were unable to recover it. 00:36:24.736 [2024-12-16 16:42:13.226411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.736 [2024-12-16 16:42:13.226444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.736 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.226563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.226593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.226724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.226756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.226875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.226906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.227074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.227114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.227327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.227358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.227645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.227677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.227851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.227882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.227997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.228029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 
00:36:24.737 [2024-12-16 16:42:13.228211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.228242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.228431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.228463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.228671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.228702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.228886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.228918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.229090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.229135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.229373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.229405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.229584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.229615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.229864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.229895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.230156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.230189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.230375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.230406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 
00:36:24.737 [2024-12-16 16:42:13.230614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.230645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.230824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.230856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.230989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.231168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.231201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.231378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.231409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.231531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.231562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.231734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.231804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.231959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.231993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.232269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.232302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.232485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.232517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 
00:36:24.737 [2024-12-16 16:42:13.232712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.232743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.232863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.232894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.233132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.737 [2024-12-16 16:42:13.233165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.737 qpair failed and we were unable to recover it. 00:36:24.737 [2024-12-16 16:42:13.233299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.233330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.233455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.233487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.233659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.233690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.233948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.233979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.234105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.234138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.234313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.234344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.234534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.234573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 
00:36:24.738 [2024-12-16 16:42:13.234833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.234865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.235053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.235084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.235209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.235241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.235372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.235402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.235663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.235693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.235812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.235843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.235965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.235996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.236181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.236214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.236319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.236351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.236539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.236571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 
00:36:24.738 [2024-12-16 16:42:13.236757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.236789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.236899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.236930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.237171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.237203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.237385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.237417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.237611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.237643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.237859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.237892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.238009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.238040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.238306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.238338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.238477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.238508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.238748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.238778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 
00:36:24.738 [2024-12-16 16:42:13.238887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.238918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.239112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.239145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.239313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.239345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.239602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.239634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.239872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.239904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.240091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.240134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.240279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.240312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.240487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.240519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.240635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.240667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.240879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.240910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 
00:36:24.738 [2024-12-16 16:42:13.241107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.241138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.241255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.738 [2024-12-16 16:42:13.241286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.738 qpair failed and we were unable to recover it. 00:36:24.738 [2024-12-16 16:42:13.241410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.241440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.241620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.241650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.241892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.241923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.242047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.242078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.242323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.242356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.242526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.242557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.242672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.242703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.242808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.242844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 
00:36:24.739 [2024-12-16 16:42:13.243039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.243070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.243375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.243406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.243521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.243552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.243666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.243696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.243889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.243920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.244043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.244073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.244298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.244330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.244517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.244548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.244807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.244839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.245018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.245049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 
00:36:24.739 [2024-12-16 16:42:13.245233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.245266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.245441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.245472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.245647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.245677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.245852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.245883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.246133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.246165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.246426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.246457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.246639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.246671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.246852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.246883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.247009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.247039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.247220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.247252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 
00:36:24.739 [2024-12-16 16:42:13.247427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.247459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.247665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.247696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.247800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.247831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.247950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.247982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.248187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.248219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.248413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.248444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.248698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.248768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.248899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.248935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.249181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.249216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.249410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.249443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 
00:36:24.739 [2024-12-16 16:42:13.249555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.739 [2024-12-16 16:42:13.249587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.739 qpair failed and we were unable to recover it. 00:36:24.739 [2024-12-16 16:42:13.249706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.249737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.249928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.249959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.250151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.250185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.250368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.250400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.250608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.250639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.250814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.250845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.251087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.251134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.251265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.251297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.251560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.251591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 
00:36:24.740 [2024-12-16 16:42:13.251787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.251819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.251999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.252030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.252211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.252245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.252420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.252451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.252707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.252738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.252856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.252886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.253158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.253191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.253358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.253390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.253514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.253545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.253660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.253691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 
00:36:24.740 [2024-12-16 16:42:13.253870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.253903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.254078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.254120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.254306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.254338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.254531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.254568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.254747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.254779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.255050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.255080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.255298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.255330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.255509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.255540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.255671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.255702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.255873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.255904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 
00:36:24.740 [2024-12-16 16:42:13.256090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.256135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.256343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.256375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.256574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.256605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.256720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.256751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.256851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.256882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.257121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.257154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.257457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.257489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.257627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.257659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.257864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.257895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 00:36:24.740 [2024-12-16 16:42:13.258069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:24.740 [2024-12-16 16:42:13.258117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:24.740 qpair failed and we were unable to recover it. 
00:36:24.740 [2024-12-16 16:42:13.258363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:24.740 [2024-12-16 16:42:13.258394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:24.740 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 in posix.c:1054, sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2288, qpair not recovered) repeats for every reconnect attempt, roughly 200 in all, at timestamps 16:42:13.258 through 16:42:13.303 ...]
00:36:25.021 [2024-12-16 16:42:13.303361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.021 [2024-12-16 16:42:13.303392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.021 qpair failed and we were unable to recover it.
00:36:25.021 [2024-12-16 16:42:13.303597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.021 [2024-12-16 16:42:13.303628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.021 qpair failed and we were unable to recover it. 00:36:25.021 [2024-12-16 16:42:13.303863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.021 [2024-12-16 16:42:13.303894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.021 qpair failed and we were unable to recover it. 00:36:25.021 [2024-12-16 16:42:13.304021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.304050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.304175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.304206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.304472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.304503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.304626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.304658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.304852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.304882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.305120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.305152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.305324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.305354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.305542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.305571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 
00:36:25.022 [2024-12-16 16:42:13.305746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.305776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.305968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.305999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.306264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.306296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.306541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.306572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.306795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.306825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.307019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.307050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.307229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.307261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.307429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.307460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.307675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.307712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.307926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.307956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 
00:36:25.022 [2024-12-16 16:42:13.308220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.308252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.308449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.308479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.308616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.308646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.308826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.308855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.309060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.309091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.309222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.309253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.309421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.309451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.309622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.309651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.309829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.309859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.309976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.310006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 
00:36:25.022 [2024-12-16 16:42:13.310194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.310226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.310442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.310472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.310666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.310696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.310885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.310916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.022 qpair failed and we were unable to recover it. 00:36:25.022 [2024-12-16 16:42:13.311054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.022 [2024-12-16 16:42:13.311084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.311265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.311297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.311410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.311440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.311687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.311718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.311915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.311946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.312074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.312112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 
00:36:25.023 [2024-12-16 16:42:13.312354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.312384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.312643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.312673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.312874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.312905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.313093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.313132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.313336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.313367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.313553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.313583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.313824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.313854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.314042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.314072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.314332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.314362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.314554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.314584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 
00:36:25.023 [2024-12-16 16:42:13.314782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.314812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.314982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.315014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.315187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.315224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.315346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.315376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.315551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.315581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.315768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.315799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.316036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.316066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.316269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.316302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.316417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.316446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.316655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.316685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 
00:36:25.023 [2024-12-16 16:42:13.316868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.316898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.317079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.317123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.317365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.317397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.317678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.317708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.317825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.023 [2024-12-16 16:42:13.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.023 qpair failed and we were unable to recover it. 00:36:25.023 [2024-12-16 16:42:13.318044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.318075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.318316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.318349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.318551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.318581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.318821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.318852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.319033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.319063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 
00:36:25.024 [2024-12-16 16:42:13.319333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.319366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.319495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.319525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.319628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.319658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.319788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.319819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.319941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.319972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.320152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.320186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.320423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.320454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.320637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.320666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.320844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.320874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.321048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.321078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 
00:36:25.024 [2024-12-16 16:42:13.321219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.321250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.321381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.321410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.321534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.321566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.321744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.321775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.321916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.321946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.322123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.322154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.322348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.322385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.322556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.322587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.322765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.322796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.024 qpair failed and we were unable to recover it. 00:36:25.024 [2024-12-16 16:42:13.322914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.024 [2024-12-16 16:42:13.322944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 
00:36:25.025 [2024-12-16 16:42:13.323058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.323088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.323341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.323373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.323488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.323518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.323698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.323728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.323912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.323945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.324062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.324092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.324221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.324251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.324383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.324414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.324600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.324631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.324736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.324766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 
00:36:25.025 [2024-12-16 16:42:13.324981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.325012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.325181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.325214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.325427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.325457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.325718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.325749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.325866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.325896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.326159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.326190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.326297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.326326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.326510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.326540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.326660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.326691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.326890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.326921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 
00:36:25.025 [2024-12-16 16:42:13.327043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.327072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.327321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.327354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.327618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.327648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.327880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.327912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.328191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.328224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.328395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.328426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.328569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.328599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.328790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.328821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.329001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.329030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.025 [2024-12-16 16:42:13.329221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.329253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 
00:36:25.025 [2024-12-16 16:42:13.329456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.025 [2024-12-16 16:42:13.329488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.025 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.329602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.329633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.329844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.329875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.330064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.330101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.330295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.330325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.330502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.330533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.330665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.330696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.330969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.331006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.331275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.331306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 00:36:25.026 [2024-12-16 16:42:13.331478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.026 [2024-12-16 16:42:13.331508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.026 qpair failed and we were unable to recover it. 
00:36:25.026 [2024-12-16 16:42:13.331611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.026 [2024-12-16 16:42:13.331641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.026 qpair failed and we were unable to recover it.
00:36:25.026 [... the same three-entry sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously with only the timestamps advancing, 16:42:13.331823 through 16:42:13.377189 ...]
00:36:25.033 [2024-12-16 16:42:13.377359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.033 [2024-12-16 16:42:13.377389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.033 qpair failed and we were unable to recover it.
00:36:25.033 [2024-12-16 16:42:13.377510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.377539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.377800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.377828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.378085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.378122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.378304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.378332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.378595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.378623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.378812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.378841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.379055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.379083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.379212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.379241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.379479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.379508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.379617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.379646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 
00:36:25.033 [2024-12-16 16:42:13.379904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.379934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.380116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.380147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.380337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.380365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.380533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.380563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.380683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.380713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.380896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.380925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.381162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.381192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.381364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.381392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.033 [2024-12-16 16:42:13.381504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.033 [2024-12-16 16:42:13.381532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.033 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.381702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.381733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 
00:36:25.034 [2024-12-16 16:42:13.381864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.381893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.382141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.382171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.382453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.382482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.382735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.382765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.382967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.382996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.383196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.383228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.383394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.383422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.383616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.383655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.383900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.383928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.384208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.384238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 
00:36:25.034 [2024-12-16 16:42:13.384429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.384457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.384639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.384667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.384886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.384915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.385109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.385139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.385334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.385363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.385536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.385565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.385737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.385767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.385937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.385966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.386148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.386179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.386438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.386467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 
00:36:25.034 [2024-12-16 16:42:13.386590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.386618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.386806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.386836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.386947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.386975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.387110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.387143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.387390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.387420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.387545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.387575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.387767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.387797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.387981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.388010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.034 [2024-12-16 16:42:13.388143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.034 [2024-12-16 16:42:13.388173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.034 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.388374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.388404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 
00:36:25.035 [2024-12-16 16:42:13.388527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.388556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.388673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.388701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.388904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.388934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.389046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.389076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.389280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.389310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.389421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.389450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.389552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.389581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.389757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.389785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.389892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.389920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.390131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.390172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 
00:36:25.035 [2024-12-16 16:42:13.390279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.390307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.390504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.390533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.390791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.390819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.390933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.390961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.391106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.391139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.391315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.391344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.391582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.391610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.391716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.391745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.391925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.391954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.392137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.392168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 
00:36:25.035 [2024-12-16 16:42:13.392447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.392476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.392586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.392616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.392794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.392822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.393086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.393126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.393401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.393431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.393621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.393652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.393843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.393872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.394118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.394150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.394343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.394372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.394478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.394506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 
00:36:25.035 [2024-12-16 16:42:13.394766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.394795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.395030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.395059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.035 [2024-12-16 16:42:13.395305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.035 [2024-12-16 16:42:13.395334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.035 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.395598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.395626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.395890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.395919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.396086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.396121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.396312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.396341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.396466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.396496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.396617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.396646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.396892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.396923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 
00:36:25.036 [2024-12-16 16:42:13.397162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.397193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.397325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.397356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.397482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.397510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.397703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.397732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.397854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.397881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.397998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.398026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.398241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.398273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.398378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.398408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.398577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.398605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 00:36:25.036 [2024-12-16 16:42:13.398717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.398745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it. 
00:36:25.036 [2024-12-16 16:42:13.399011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.036 [2024-12-16 16:42:13.399079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.036 qpair failed and we were unable to recover it.
[... from this point the failing qpair is 0x7fb4b0000b90 (same addr=10.0.0.2, port=4420, errno = 111); the error pair repeats roughly 70 times between 16:42:13.399300 and 16:42:13.568531, with a gap in the log between 16:42:13.405619 and 16:42:13.560268 ...]
00:36:25.038 [2024-12-16 16:42:13.568718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.568748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.568985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.569017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.569222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.569258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.569436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.569467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.569639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.569671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.569846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.569877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.570086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.570127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.570243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.570275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.570510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.038 [2024-12-16 16:42:13.570541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.038 qpair failed and we were unable to recover it. 00:36:25.038 [2024-12-16 16:42:13.570801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.570832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 
00:36:25.039 [2024-12-16 16:42:13.571004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.571040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.571324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.571356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.571479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.571511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.571638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.571669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.571929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.571961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.572224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.572258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.572544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.572575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.572769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.572800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.572991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.573023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.573159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.573191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 
00:36:25.039 [2024-12-16 16:42:13.573326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.573357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.573529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.573560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.573680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.573710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.573819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.573851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.574070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.574112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.574320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.574351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.574552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.574583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.574772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.574805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.574904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.574935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.575047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.575077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 
00:36:25.039 [2024-12-16 16:42:13.575297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.575329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.575517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.575548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.575760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.575791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.575918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.575948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.576065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.576109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.576224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.576255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.576426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.576457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.576579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.576610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.576872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.576904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.577083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.577126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 
00:36:25.039 [2024-12-16 16:42:13.577267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.577298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.577502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.577534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.577648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.577679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.039 [2024-12-16 16:42:13.577871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.039 [2024-12-16 16:42:13.577903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.039 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.578076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.578116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.578378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.578410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.578610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.578641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.578762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.578793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.578977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.579008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.579191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.579224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 
00:36:25.040 [2024-12-16 16:42:13.579477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.579515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.579723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.579755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.579939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.579970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.580141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.580175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.580388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.580419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.580677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.580708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.580814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.580845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.581030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.581062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.581284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.581316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.581487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.581518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 
00:36:25.040 [2024-12-16 16:42:13.581693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.581724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.581937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.581968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.582208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.582240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.582440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.582471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.582669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.582701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.582882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.582912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.583182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.583214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.583402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.583434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.583607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.583638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.583901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.583932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 
00:36:25.040 [2024-12-16 16:42:13.584142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.584176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.584306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.584337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.584470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.584501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.584752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.584783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.584957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.584987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.585227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.585260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.585465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.585496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.585758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.585790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.585994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.586025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 00:36:25.040 [2024-12-16 16:42:13.586213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.586246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.040 qpair failed and we were unable to recover it. 
00:36:25.040 [2024-12-16 16:42:13.586349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.040 [2024-12-16 16:42:13.586379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.586637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.586668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.586869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.586900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.587092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.587131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.587269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.587299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.587425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.587456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.587592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.587624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.587824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.587855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.588023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.588054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.588292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.588326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 
00:36:25.041 [2024-12-16 16:42:13.588508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.588545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.588731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.588762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.588974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.589004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.589123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.589156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.589329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.589360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.589474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.589505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.589691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.589722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.589894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.589924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.590105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.590137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.590381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.590411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 
00:36:25.041 [2024-12-16 16:42:13.590595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.590627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.590820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.590851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.591091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.591132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.591339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.591371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.591509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.591541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.591733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.041 [2024-12-16 16:42:13.591764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.041 qpair failed and we were unable to recover it. 00:36:25.041 [2024-12-16 16:42:13.591949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.591981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.592275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.592308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.592552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.592583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.592791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.592823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 
00:36:25.042 [2024-12-16 16:42:13.593024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.593054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.593303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.593336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.593516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.593548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.593685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.593717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.593834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.593865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.594040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.594072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.594192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.594223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.594470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.594503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.594628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.594659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.594831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.594862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 
00:36:25.042 [2024-12-16 16:42:13.595029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.595060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.595251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.595285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.595400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.595430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.595534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.595564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.595806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.595838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.595951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.595982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.596144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.596178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.596350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.596381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.596558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.596589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 00:36:25.042 [2024-12-16 16:42:13.596690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.042 [2024-12-16 16:42:13.596720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.042 qpair failed and we were unable to recover it. 
00:36:25.042 [2024-12-16 16:42:13.596918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.042 [2024-12-16 16:42:13.596955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:25.042 qpair failed and we were unable to recover it.
00:36:25.325 [previous three lines repeated, timestamps aside, for tqpair=0x7fb4b0000b90 through 2024-12-16 16:42:13.620711]
00:36:25.325 [2024-12-16 16:42:13.621019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.325 [2024-12-16 16:42:13.621089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.325 qpair failed and we were unable to recover it.
00:36:25.325 [previous three lines repeated for tqpair=0x7fb4bc000b90 through 2024-12-16 16:42:13.628063]
00:36:25.325 [2024-12-16 16:42:13.628252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.325 [2024-12-16 16:42:13.628289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:25.326 qpair failed and we were unable to recover it.
00:36:25.327 [previous three lines repeated for tqpair=0x7fb4b0000b90 through 2024-12-16 16:42:13.642025]
00:36:25.327 [2024-12-16 16:42:13.642205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.642237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.642357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.642388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.642503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.642533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.642704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.642735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.642997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.643029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.643266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.643299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.643585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.643615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.643734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.643764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.644033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.644064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.327 qpair failed and we were unable to recover it. 00:36:25.327 [2024-12-16 16:42:13.644330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.327 [2024-12-16 16:42:13.644362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 
00:36:25.328 [2024-12-16 16:42:13.644600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.644631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.644745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.644776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.644950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.644981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.645187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.645220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.645489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.645519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.645708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.645740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.645859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.645889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.646014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.646046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.646231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.646263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.646439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.646469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 
00:36:25.328 [2024-12-16 16:42:13.646729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.646760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.646891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.646922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.647117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.647148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.647333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.647364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.647481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.647512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.647635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.647667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.647780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.647811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.647925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.647956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.648065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.648118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.648319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.648351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 
00:36:25.328 [2024-12-16 16:42:13.648450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.648487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.648599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.648631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.648823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.648854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.649036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.649067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.649187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.649220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.649392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.649423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.649524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.649554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.649744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.649774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.649913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.649944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.650133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.650167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 
00:36:25.328 [2024-12-16 16:42:13.650342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.650373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.650544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.328 [2024-12-16 16:42:13.650576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.328 qpair failed and we were unable to recover it. 00:36:25.328 [2024-12-16 16:42:13.650781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.650812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.650998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.651029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.651248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.651280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.651461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.651493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.651604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.651635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.651880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.651911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.652014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.652046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.652293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.652325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 
00:36:25.329 [2024-12-16 16:42:13.652534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.652565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.652697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.652729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.652973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.653004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.653123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.653156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.653274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.653306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.653498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.653529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.653707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.653738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.653941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.653988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.654148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.654200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.654419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.654458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 
00:36:25.329 [2024-12-16 16:42:13.654654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.654685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.654812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.654845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.655026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.655057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.655280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.655312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.655436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.655467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.655708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.655739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.655921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.655953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.656140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.656173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.656307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.656338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.656508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.656540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 
00:36:25.329 [2024-12-16 16:42:13.656723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.656761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.656953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.656985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.657267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.657300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.657523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.657554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.657738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.657769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.658014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.658045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.658158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.658190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.658374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.658405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.658592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.658624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.658828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.658858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 
00:36:25.329 [2024-12-16 16:42:13.659046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.329 [2024-12-16 16:42:13.659078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.329 qpair failed and we were unable to recover it. 00:36:25.329 [2024-12-16 16:42:13.659338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.659369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.659637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.659669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.659867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.660134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.660169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.660429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.660461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.660579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.660610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.660734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.660765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.660947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.660978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.661093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.661142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 
00:36:25.330 [2024-12-16 16:42:13.661404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.661436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.661674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.661705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.661945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.661977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.662164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.662197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.662384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.662415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.662621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.662652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.662834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.662865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.663113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.663147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.663421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.663451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.663643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.663675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 
00:36:25.330 [2024-12-16 16:42:13.663863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.663895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.664136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.664168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.664433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.664463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.664596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.664628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.664862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.664893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.665090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.665149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.665342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.665374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.665543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.665574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.665746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.665778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.665904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.665935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 
00:36:25.330 [2024-12-16 16:42:13.666140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.666173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.666287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.666318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.666448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.666480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.666648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.666679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.666861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.666893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.667068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.667107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.667279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.667310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.667496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.667527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.667647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.667677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 00:36:25.330 [2024-12-16 16:42:13.667862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.667895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.330 qpair failed and we were unable to recover it. 
00:36:25.330 [2024-12-16 16:42:13.667997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.330 [2024-12-16 16:42:13.668029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.668289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.668323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.668444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.668475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.668683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.668714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.668894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.668926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.669177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.669209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.669329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.669360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.669481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.669513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.669718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.669749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 00:36:25.331 [2024-12-16 16:42:13.669929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.331 [2024-12-16 16:42:13.669961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.331 qpair failed and we were unable to recover it. 
00:36:25.331 [2024-12-16 16:42:13.670220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.331 [2024-12-16 16:42:13.670253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:25.331 qpair failed and we were unable to recover it.
00:36:25.336 [... the identical record pair "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420", each followed by "qpair failed and we were unable to recover it.", repeats continuously through 2024-12-16 16:42:13.712954 ...]
00:36:25.336 [2024-12-16 16:42:13.713216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.713248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.713358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.713390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.713494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.713524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.713697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.713728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.713827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.713862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.714061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.714093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.714304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.714334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.714522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.714553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.714747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.714778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 00:36:25.336 [2024-12-16 16:42:13.714964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.714996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.336 qpair failed and we were unable to recover it. 
00:36:25.336 [2024-12-16 16:42:13.715266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.336 [2024-12-16 16:42:13.715299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.715492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.715523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.715640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.715672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.715849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.715879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.716127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.716159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.716348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.716378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.716643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.716674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.716799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.716831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.716963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.716994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.717234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.717267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 
00:36:25.337 [2024-12-16 16:42:13.717503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.717534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.717726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.717757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.717890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.717921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.718040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.718071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.718273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.718305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.718550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.718582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.718753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.718784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.718905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.718937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.719120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.719153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.719347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.719378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 
00:36:25.337 [2024-12-16 16:42:13.719494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.719525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.719697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.719728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.719864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.719895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.720000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.720032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.720236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.720268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.720450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.720481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.720616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.720648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.720854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.720891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.721138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.721171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.721308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.721339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 
00:36:25.337 [2024-12-16 16:42:13.721462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.721494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.721614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.721646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.721840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.721871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.722138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.722171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.722359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.337 [2024-12-16 16:42:13.722391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.337 qpair failed and we were unable to recover it. 00:36:25.337 [2024-12-16 16:42:13.722654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.722685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.722800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.722831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.722936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.722967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.723147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.723180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.723400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.723431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 
00:36:25.338 [2024-12-16 16:42:13.723607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.723638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.723813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.723844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.724079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.724119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.724238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.724269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.724508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.724540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.724722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.724753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.724937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.724967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.725120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.725153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.725444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.725475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.725686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.725716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 
00:36:25.338 [2024-12-16 16:42:13.725887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.725918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.726118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.726150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.726392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.726423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.726656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.726687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.726860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.726892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.727075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.727118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.727288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.727318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.727435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.727466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.727636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.727667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.727799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.727829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 
00:36:25.338 [2024-12-16 16:42:13.727956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.727987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.728159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.728192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.728367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.728397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.728602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.728633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.728740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.728771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.728939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.728969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.729229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.729260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.729499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.729536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.729673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.729704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.729956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.729987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 
00:36:25.338 [2024-12-16 16:42:13.730087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.730127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.730393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.730424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.730592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.730622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.730790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.338 [2024-12-16 16:42:13.730822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.338 qpair failed and we were unable to recover it. 00:36:25.338 [2024-12-16 16:42:13.730989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.731019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.731332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.731364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.731552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.731583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.731770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.731800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.731977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.732008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.732113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.732146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 
00:36:25.339 [2024-12-16 16:42:13.732281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.732313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.732490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.732521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.732653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.732683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.732854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.732885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.733073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.733111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.733371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.733402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.733582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.733613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.733780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.733811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.733999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.734029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.734201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.734233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 
00:36:25.339 [2024-12-16 16:42:13.734347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.734377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.734508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.734539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.734646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.734677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.734861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.734892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.735067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.735107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.735233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.735264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.735502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.735533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.735635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.735665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.735778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.735810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.735999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.736030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 
00:36:25.339 [2024-12-16 16:42:13.736295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.736328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.736457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.736488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.736673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.736704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.736812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.736842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.737034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.737069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.737281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.737314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.737499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.737529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.737735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.737773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.737894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.737925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.738134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.738166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 
00:36:25.339 [2024-12-16 16:42:13.738334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.738366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.738647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.738679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.738866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.339 [2024-12-16 16:42:13.738897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.339 qpair failed and we were unable to recover it. 00:36:25.339 [2024-12-16 16:42:13.739067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.739105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 00:36:25.340 [2024-12-16 16:42:13.739277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.739308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 00:36:25.340 [2024-12-16 16:42:13.739476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.739507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 00:36:25.340 [2024-12-16 16:42:13.739745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.739776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 00:36:25.340 [2024-12-16 16:42:13.740013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.740045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 00:36:25.340 [2024-12-16 16:42:13.740224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.740257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 00:36:25.340 [2024-12-16 16:42:13.740442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.340 [2024-12-16 16:42:13.740473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.340 qpair failed and we were unable to recover it. 
00:36:25.340 [2024-12-16 16:42:13.740677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.340 [2024-12-16 16:42:13.740709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:25.340 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error / qpair failed and we were unable to recover it.) repeats for tqpair=0x7fb4b0000b90 through 16:42:13.743354, all against addr=10.0.0.2, port=4420 ...]
00:36:25.340 [2024-12-16 16:42:13.743507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.340 [2024-12-16 16:42:13.743578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.340 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0xa35cd0 through 16:42:13.784735, all against addr=10.0.0.2, port=4420 ...]
00:36:25.345 [2024-12-16 16:42:13.784848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.345 [2024-12-16 16:42:13.784878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.345 qpair failed and we were unable to recover it.
00:36:25.345 [2024-12-16 16:42:13.785064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.785103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.785285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.785317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.785565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.785595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.785854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.785886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.786067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.786105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.786364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.786395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.786499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.786530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.786799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.786829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.787009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.787046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.787170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.787202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 
00:36:25.345 [2024-12-16 16:42:13.787399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.787430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.787546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.787576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.787838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.787869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.787991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.788023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.788291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.788324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.345 [2024-12-16 16:42:13.788461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.345 [2024-12-16 16:42:13.788492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.345 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.788604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.788636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.788739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.788770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.788964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.788995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.789192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.789226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 
00:36:25.346 [2024-12-16 16:42:13.789354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.789386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.789504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.789535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.789707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.789738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.789916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.789948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.790124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.790157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.790356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.790389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.790656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.790688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.790907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.790939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.791128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.791162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.791424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.791454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 
00:36:25.346 [2024-12-16 16:42:13.791692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.791724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.791838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.791869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.792001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.792032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.792200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.792232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.792418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.792449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.792722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.792760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.792892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.792924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.793029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.793060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.793334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.793367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.793506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.793537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 
00:36:25.346 [2024-12-16 16:42:13.793741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.793772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.793890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.793922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.794104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.794136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.794254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.794285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.794560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.794591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.794711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.794741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.794922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.794954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.795141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.795173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.795438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.795469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 00:36:25.346 [2024-12-16 16:42:13.795659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.346 [2024-12-16 16:42:13.795691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.346 qpair failed and we were unable to recover it. 
00:36:25.346 [2024-12-16 16:42:13.795811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.795842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.796109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.796142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.796337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.796367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.796635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.796666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.796915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.796947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.797147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.797180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.797441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.797660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.797691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.797860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.797891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.798066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.798107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 
00:36:25.347 [2024-12-16 16:42:13.798224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.798256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.798381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.798412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.798621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.798651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.798847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.798878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.799069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.799109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.799348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.799381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.799514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.799545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.799668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.799699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.799901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.799933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.800230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.800262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 
00:36:25.347 [2024-12-16 16:42:13.800511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.800542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.800733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.800765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.801024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.801056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.801183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.801215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.801388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.801419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.801547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.801577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.801832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.801869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.802054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.802085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.802212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.802243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.802413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.802442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 
00:36:25.347 [2024-12-16 16:42:13.802717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.802748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.802867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.802896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.803092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.803136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.803337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.803369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.803491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.803523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.803719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.803750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.804011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.804043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.804345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.804380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.804635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.804667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 00:36:25.347 [2024-12-16 16:42:13.804954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.347 [2024-12-16 16:42:13.804984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.347 qpair failed and we were unable to recover it. 
00:36:25.347 [2024-12-16 16:42:13.805235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.805269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.805407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.805440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.805620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.805650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.805786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.805817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.806078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.806119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.806306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.806336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.806516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.806547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.806782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.806813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.807065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.807105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.807242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.807273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 
00:36:25.348 [2024-12-16 16:42:13.807399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.807429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.807557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.807587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.807780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.807811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.808010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.808047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.808192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.808225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.808343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.808374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.808619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.808651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.808853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.808883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.808989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.809019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.809218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.809251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 
00:36:25.348 [2024-12-16 16:42:13.809427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.809458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.809572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.809603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.809795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.809827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.809933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.809963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.810150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.810183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.810343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.810375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.810500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.810531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.810857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.810927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.811145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.811183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.811321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.811354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 
00:36:25.348 [2024-12-16 16:42:13.811592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.811623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.811724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.811756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.811943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.811974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.812168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.812201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.812379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.812412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.812616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.812647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.812838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.812869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.813148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.813182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.813456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.813488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 00:36:25.348 [2024-12-16 16:42:13.813685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.348 [2024-12-16 16:42:13.813717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.348 qpair failed and we were unable to recover it. 
00:36:25.349 [2024-12-16 16:42:13.813988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.814029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.814193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.814226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.814416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.814447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.814638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.814670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.814911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.814942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.815115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.815147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.815291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.815323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.815536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.815566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.815836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.815867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 00:36:25.349 [2024-12-16 16:42:13.816050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.349 [2024-12-16 16:42:13.816081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.349 qpair failed and we were unable to recover it. 
00:36:25.354 [2024-12-16 16:42:13.866184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.866217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.866390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.866421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.866690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.866722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.866983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.867014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.867128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.867161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.867294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.867325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.867586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.867617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.867869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.867900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.868071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.868112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.868286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.868317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 
00:36:25.354 [2024-12-16 16:42:13.868485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.868516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.868800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.868831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.869072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.869111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.869376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.869407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.869597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.869628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.869918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.869949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.870192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.870224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.870465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.870766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.870796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 00:36:25.354 [2024-12-16 16:42:13.870966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.354 [2024-12-16 16:42:13.870997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.354 qpair failed and we were unable to recover it. 
00:36:25.355 [2024-12-16 16:42:13.871208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.871241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.871436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.871467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.871683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.871715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.871896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.871926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.872188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.872221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.872512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.872543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.872832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.872863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.873133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.873166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.873454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.873485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.873752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.873783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 
00:36:25.355 [2024-12-16 16:42:13.873967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.873998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.874131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.874165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.874436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.874468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.874734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.874766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.875045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.875077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.875281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.875314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.875583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.875613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.875789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.875819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.876105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.876138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.876332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.876363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 
00:36:25.355 [2024-12-16 16:42:13.876571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.876603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.876732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.876763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.877005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.877037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.877229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.877260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.877509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.877542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.877807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.877838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.878122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.878155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.878434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.878466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.878735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.878766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.878951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.878983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 
00:36:25.355 [2024-12-16 16:42:13.879250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.879284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.879495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.879527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.879790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.879821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.879956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.879988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.355 qpair failed and we were unable to recover it. 00:36:25.355 [2024-12-16 16:42:13.880227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.355 [2024-12-16 16:42:13.880260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.880524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.880556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.880797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.880828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.881104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.881143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.881390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.881422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.881668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.881700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 
00:36:25.356 [2024-12-16 16:42:13.881912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.881944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.882154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.882187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.882387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.882418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.882680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.882711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.882896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.882928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.883124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.883156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.883351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.883383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.883644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.883675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.883964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.883994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.884274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.884308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 
00:36:25.356 [2024-12-16 16:42:13.884571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.884602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.884854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.884885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.885149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.885182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.885476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.885506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.885778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.885809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.886108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.886141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.886411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.886443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.886711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.886743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.887034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.887066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.887341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.887372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 
00:36:25.356 [2024-12-16 16:42:13.887661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.887693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.887965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.887996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.888193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.888225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.888399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.888431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.888632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.888664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.888935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.888966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.889234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.889267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.889560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.889591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.889860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.889892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 00:36:25.356 [2024-12-16 16:42:13.890090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.356 [2024-12-16 16:42:13.890145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.356 qpair failed and we were unable to recover it. 
00:36:25.357 [2024-12-16 16:42:13.890341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.890372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.890476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.890508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.890772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.890802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.891071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.891113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.891335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.891365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.891535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.891567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.891783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.891813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.891986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.892024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.892293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.892327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.892431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.892461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 
00:36:25.357 [2024-12-16 16:42:13.892769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.892800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.893056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.893087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.893348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.893380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.893576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.893611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.893884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.893915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.894113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.894145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.894320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.894352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.894595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.894626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.894820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.894853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.894985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.895016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 
00:36:25.357 [2024-12-16 16:42:13.895210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.895243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.895439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.895473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.895752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.895784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.895978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.896009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.896146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.896179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.896302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.896334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.896436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.896467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.896731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.896763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.896979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.897011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.897305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.897338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 
00:36:25.357 [2024-12-16 16:42:13.897584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.897615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.897841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.897872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.898049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.898079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.898354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.898387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.898512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.898542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.898738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.898773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.898981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.899013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.899270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.899304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.357 [2024-12-16 16:42:13.899571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.357 [2024-12-16 16:42:13.899602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.357 qpair failed and we were unable to recover it. 00:36:25.358 [2024-12-16 16:42:13.899849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.358 [2024-12-16 16:42:13.899880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.358 qpair failed and we were unable to recover it. 
00:36:25.358 [2024-12-16 16:42:13.900153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.358 [2024-12-16 16:42:13.900186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.358 qpair failed and we were unable to recover it.
[the same three-line sequence -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats roughly 200 more times between 16:42:13.900 and 16:42:13.951, all for the same tqpair, address, and port]
00:36:25.642 [2024-12-16 16:42:13.951572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.642 [2024-12-16 16:42:13.951603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.642 qpair failed and we were unable to recover it.
00:36:25.642 [2024-12-16 16:42:13.951800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.951832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.952015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.952046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.952297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.952330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.952506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.952538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.952736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.952767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.952875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.952908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.953108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.953141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.953345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.953376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.953568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.953599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.953781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.953813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 
00:36:25.642 [2024-12-16 16:42:13.954032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.954064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.954264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.954297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.954499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.954530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.642 [2024-12-16 16:42:13.954826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.642 [2024-12-16 16:42:13.954859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.642 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.955134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.955168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.955347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.955378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.955670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.955702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.955903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.955935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.956137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.956171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.956446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.956479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 
00:36:25.643 [2024-12-16 16:42:13.956782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.956813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.957073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.957115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.957420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.957452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.957596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.957629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.957910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.957943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.958070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.958113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.958316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.958349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.958620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.958651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.958911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.958942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.959220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.959254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 
00:36:25.643 [2024-12-16 16:42:13.959504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.959535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.959735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.959767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.959970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.960002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.960144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.960177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.960383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.960414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.960687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.960719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.960922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.960954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.961075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.961126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.961338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.961370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.961590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.961623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 
00:36:25.643 [2024-12-16 16:42:13.961893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.961924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.962058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.962090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.962409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.962443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.962582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.962614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.962897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.962929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.963157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.963191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.963399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.963432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.963620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.963652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.963928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.963961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.964167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.964200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 
00:36:25.643 [2024-12-16 16:42:13.964393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.643 [2024-12-16 16:42:13.964426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.643 qpair failed and we were unable to recover it. 00:36:25.643 [2024-12-16 16:42:13.964705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.964737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.964991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.965023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.965224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.965258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.965504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.965536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.965786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.965817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.965962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.965993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.966209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.966243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.966428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.966460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.966660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.966692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 
00:36:25.644 [2024-12-16 16:42:13.966913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.966945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.967146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.967179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.967378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.967410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.967608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.967638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.967837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.967869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.968132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.968165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.968460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.968491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.968787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.968819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.969002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.969033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.969318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.969351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 
00:36:25.644 [2024-12-16 16:42:13.969612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.969644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.969928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.969960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.970166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.970199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.970393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.970424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.970622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.970654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.970951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.970982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.971255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.971289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.971518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.971556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.971735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.971766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.972038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.972070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 
00:36:25.644 [2024-12-16 16:42:13.972275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.972306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.972560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.972591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.972771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.972802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.973084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.973126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.973390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.973421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.973601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.973633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.973856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.973887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.974090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.974132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.644 [2024-12-16 16:42:13.974358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.644 [2024-12-16 16:42:13.974390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.644 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.974573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.974604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 
00:36:25.645 [2024-12-16 16:42:13.974790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.974821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.975075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.975119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.975335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.975367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.975542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.975573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.975765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.975797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.975991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.976021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.976293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.976327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.976479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.976511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.976786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.976818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.977075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.977118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 
00:36:25.645 [2024-12-16 16:42:13.977335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.977365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.977634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.977667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.977869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.977901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.978115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.978149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.978383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.978415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.978688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.978720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.978840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.978872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.979159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.979192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.979449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.979481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.979789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.979822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 
00:36:25.645 [2024-12-16 16:42:13.980086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.980129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.980405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.980436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.980717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.980748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.981000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.981032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.981347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.981380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.981562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.981594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.981899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.981932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.982192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.982225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.982500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.982533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.982827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.982859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 
00:36:25.645 [2024-12-16 16:42:13.983132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.983166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.983362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.983393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.983644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.983676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.983950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.983982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.984189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.984222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.984417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.984448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.984647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.984679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.645 [2024-12-16 16:42:13.984882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.645 [2024-12-16 16:42:13.984913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.645 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.985130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.985164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.985414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.985445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 
00:36:25.646 [2024-12-16 16:42:13.985691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.985723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.985955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.985988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.986249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.986283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.986595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.986626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.986805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.986836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.987092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.987135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.987414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.987445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.987663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.987696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.987950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.987982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 00:36:25.646 [2024-12-16 16:42:13.988258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.646 [2024-12-16 16:42:13.988291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.646 qpair failed and we were unable to recover it. 
00:36:25.651 [... the preceding connect()/qpair failure triple repeats continuously for tqpair=0x7fb4bc000b90 (addr=10.0.0.2, port=4420, errno = 111) from 16:42:13.985 through 16:42:14.040; duplicate entries elided ...]
00:36:25.651 [2024-12-16 16:42:14.040631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.651 [2024-12-16 16:42:14.040663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.651 qpair failed and we were unable to recover it. 00:36:25.651 [2024-12-16 16:42:14.040937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.651 [2024-12-16 16:42:14.040970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.651 qpair failed and we were unable to recover it. 00:36:25.651 [2024-12-16 16:42:14.041187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.651 [2024-12-16 16:42:14.041222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.651 qpair failed and we were unable to recover it. 00:36:25.651 [2024-12-16 16:42:14.041493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.651 [2024-12-16 16:42:14.041525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.651 qpair failed and we were unable to recover it. 00:36:25.651 [2024-12-16 16:42:14.041734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.651 [2024-12-16 16:42:14.041773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.651 qpair failed and we were unable to recover it. 00:36:25.651 [2024-12-16 16:42:14.041998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.042031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.042267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.042300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.042574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.042606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.042798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.042831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.043009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.043041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 
00:36:25.652 [2024-12-16 16:42:14.043368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.043402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.043652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.043684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.043985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.044018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.044316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.044350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.044548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.044580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.044854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.044887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.045079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.045123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.045318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.045350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.045534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.045566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.045875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.045910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 
00:36:25.652 [2024-12-16 16:42:14.046199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.046232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.046371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.046404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.046591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.046623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.046839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.046870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.047149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.047183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.047469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.047502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.047723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.047754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.048027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.048060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.048278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.048312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.048567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.048600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 
00:36:25.652 [2024-12-16 16:42:14.048812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.048844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.049029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.049062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.049218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.049252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.049508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.049540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.049743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.049774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.050070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.050116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.050340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.050372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.050518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.050550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.050791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.050823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.051105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.051139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 
00:36:25.652 [2024-12-16 16:42:14.051270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.051301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.051572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.051604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.051879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.051912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.652 qpair failed and we were unable to recover it. 00:36:25.652 [2024-12-16 16:42:14.052226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.652 [2024-12-16 16:42:14.052259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.052451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.052489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.052636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.052667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.053011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.053043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.053308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.053342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.053541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.053573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.053821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.053853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 
00:36:25.653 [2024-12-16 16:42:14.054075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.054118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.054311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.054343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.054579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.054611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.054815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.054848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.055107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.055140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.055269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.055301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.055503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.055535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.055725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.055756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.055963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.055996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.056196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.056229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 
00:36:25.653 [2024-12-16 16:42:14.056363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.056394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.056616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.056649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.056866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.056899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.057092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.057144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.057274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.057306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.057532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.057563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.057679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.057711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.057987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.058019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.058170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.058203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.058325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.058357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 
00:36:25.653 [2024-12-16 16:42:14.058559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.058592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.058908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.058940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.059124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.059158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.059279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.059311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.059588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.059620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.059825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.059856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.060129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.060163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.060439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.060470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.060756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.060789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.060900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.060932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 
00:36:25.653 [2024-12-16 16:42:14.061073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.061114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.061391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.061423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.653 qpair failed and we were unable to recover it. 00:36:25.653 [2024-12-16 16:42:14.061628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.653 [2024-12-16 16:42:14.061660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.061902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.061934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.062182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.062232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.062429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.062460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.062715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.062746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.063039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.063071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.063321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.063354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.063637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.063668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 
00:36:25.654 [2024-12-16 16:42:14.063810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.063841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.064088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.064135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.064317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.064349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.064467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.064499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.064765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.064797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.065076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.065136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.065296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.065329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.065532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.065564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.065771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.065805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.066054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.066087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 
00:36:25.654 [2024-12-16 16:42:14.066330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.066363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.066566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.066598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.066901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.066933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.067053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.067085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.067335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.067368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.067576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.067609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.067814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.067845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.068068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.068111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.068295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.068327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.068470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.068502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 
00:36:25.654 [2024-12-16 16:42:14.068681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.068713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.068972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.069005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.069146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.069181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.069410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.069442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.069591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.069623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.069833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.069865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.070046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.070078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.070293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.070325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.070600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.654 [2024-12-16 16:42:14.070632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.654 qpair failed and we were unable to recover it. 00:36:25.654 [2024-12-16 16:42:14.070902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.070934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 
00:36:25.655 [2024-12-16 16:42:14.071230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.071263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.071535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.071568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.071857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.071889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.072005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.072036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.072182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.072221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.072485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.072517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.072667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.072697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.072878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.072910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.073216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.073251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 00:36:25.655 [2024-12-16 16:42:14.073512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.655 [2024-12-16 16:42:14.073545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.655 qpair failed and we were unable to recover it. 
00:36:25.655 [2024-12-16 16:42:14.073818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.655 [2024-12-16 16:42:14.073850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.655 qpair failed and we were unable to recover it.
00:36:25.655 [... the same three-line failure (connect() errno = 111 -> sock connection error on tqpair 0x7fb4bc000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats without variation, timestamps aside, for every reconnect attempt from 16:42:14.073 through 16:42:14.108 ...]
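For context on the wall of failures above: errno = 111 is ECONNREFUSED on Linux. The host side keeps re-dialing 10.0.0.2:4420 while nothing is listening there (the nvmf_tgt process serving that port has just been killed by the test, as the trace below shows), so each connect(2) is answered with a TCP RST and fails immediately instead of timing out; that is why identical triplets pile up within a few tens of milliseconds. A minimal standalone C sketch of how such a failure surfaces -- illustrative only, not the actual SPDK path through posix_sock_create()/nvme_tcp_qpair_connect_sock():

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Dial addr:port once; return the connected fd, or -errno on failure. */
static int try_connect(const char *addr, uint16_t port)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1)
        return -EINVAL;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        int err = errno;    /* 111 == ECONNREFUSED on Linux: no listener */
        close(fd);
        return -err;
    }
    return fd;              /* caller owns the connected socket */
}

int main(void)
{
    int rc = try_connect("10.0.0.2", 4420);

    if (rc == -ECONNREFUSED)
        fprintf(stderr, "connect() failed, errno = %d (ECONNREFUSED)\n", ECONNREFUSED);
    else if (rc >= 0)
        close(rc);
    return 0;
}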
00:36:25.659 [2024-12-16 16:42:14.108763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.108797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.108960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.108993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.109201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.109236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1208305 Killed "${NVMF_APP[@]}" "$@" 00:36:25.659 [2024-12-16 16:42:14.109435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.109470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.109743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.109774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.109999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.110032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:25.659 [2024-12-16 16:42:14.110152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.110412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.110444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 
00:36:25.659 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:25.659 [2024-12-16 16:42:14.110636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.110669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:25.659 [2024-12-16 16:42:14.110921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.110957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.111116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.111152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.111401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.111435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:25.659 [2024-12-16 16:42:14.111575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.111609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.111824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.111855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.111986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.112018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.112220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.112255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 
00:36:25.659 [2024-12-16 16:42:14.112474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.112506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.112691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.112725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.112928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.112959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.113218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.113251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.113384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.113418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.113617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.113650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.113922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.113954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.114215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.114248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.114375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.114409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.114557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.114601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 
00:36:25.659 [2024-12-16 16:42:14.114814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.114846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.115115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.115148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.115356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.115388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.115588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.115620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.115935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.659 [2024-12-16 16:42:14.115970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.659 qpair failed and we were unable to recover it. 00:36:25.659 [2024-12-16 16:42:14.116215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.116248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.116400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.116432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.116650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.116682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.116937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.116968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.117150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.117183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 
00:36:25.660 [2024-12-16 16:42:14.117327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.117360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.117514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.117546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.117701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.117733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.117875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.117907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.118110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.118142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.118264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.118296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.118408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.118440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1209041 00:36:25.660 [2024-12-16 16:42:14.118646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.118682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 00:36:25.660 [2024-12-16 16:42:14.118878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.660 [2024-12-16 16:42:14.118910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.660 qpair failed and we were unable to recover it. 
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1209041
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1209041 ']'
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:25.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:25.660 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
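The xtrace lines above record nvmf/common.sh bringing up a fresh target: nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace (with what appear to be the usual SPDK app options: -m 0xF0 as the core mask, -i 0 as the shared-memory id, -e 0xFFFF as the tracepoint/log mask), its pid is captured in nvmfpid, and waitforlisten polls until the new process accepts RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A sketch of that wait loop, using a hypothetical wait_for_listen() helper (the real waitforlisten is a shell function in autotest_common.sh, not this C code):

    /* Poll until a process accepts connections on a UNIX domain socket,
     * giving up after max_retries attempts. Illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_listen(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            struct sockaddr_un addr = { 0 };

            if (fd < 0)
                return -1;
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;      /* target is up and listening */
            }
            close(fd);
            sleep(1);          /* retry until the RPC socket appears */
        }
        return -1;             /* never came up within max_retries */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0)
            fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 0;
    }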
00:36:25.661 [2024-12-16 16:42:14.122613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.122652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.122763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.122797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.123054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.123087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.123312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.123345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.123479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.123511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.123780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.123812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.124007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.124040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.124228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.124261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.124389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.124422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.124651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.124684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 
00:36:25.661 [2024-12-16 16:42:14.124863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.124895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.125016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.125048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.125210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.125246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.125401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.125434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.125574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.125607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.125833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.125865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.126139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.126174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.126380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.126412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.126612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.126645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.126956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.126988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 
00:36:25.661 [2024-12-16 16:42:14.127282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.127316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.127460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.127493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.127692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.127723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.127904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.127938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.128186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.128220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.128375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.128407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.128656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.128689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.128994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.129027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.129285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.129318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.129464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.129497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 
00:36:25.661 [2024-12-16 16:42:14.129799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.129831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.130119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.130153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.130358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.130392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.130528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.130561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.130712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.130745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.661 [2024-12-16 16:42:14.130873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.661 [2024-12-16 16:42:14.130905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.661 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.131181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.131217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.131401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.131434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.131704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.131739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.131872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.131904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 
00:36:25.662 [2024-12-16 16:42:14.132208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.132249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.132388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.132421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.132677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.132710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.132937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.132971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.133176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.133209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.133396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.133429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.133580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.133613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.133933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.133969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.134199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.134234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.134449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.134483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 
00:36:25.662 [2024-12-16 16:42:14.134816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.134849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.135151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.135184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.135366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.135398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.135646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.135678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.135937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.135970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.136158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.136192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.136417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.136450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.136671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.136702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.137001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.137036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.137284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.137316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 
00:36:25.662 [2024-12-16 16:42:14.137443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.137475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.137665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.137697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.137956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.137989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.138174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.138208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.138393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.138425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.138629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.138662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.138950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.138982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.139226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.139261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.139416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.139448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.139702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.139735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 
00:36:25.662 [2024-12-16 16:42:14.139994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.140027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.140269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.140302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.140533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.140566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.140698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.140733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.662 [2024-12-16 16:42:14.140867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.662 [2024-12-16 16:42:14.140899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.662 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.141170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.141204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.141453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.141485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.141689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.141722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.142004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.142037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.142335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.142369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 
00:36:25.663 [2024-12-16 16:42:14.142565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.142597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.142887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.142921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.143117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.143151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.143404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.143435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.143739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.143771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.144036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.144068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.144289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.144322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.144509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.144541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.144756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.144789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.145001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.145033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 
00:36:25.663 [2024-12-16 16:42:14.145251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.145285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.145442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.145474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.145592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.145625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.145766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.145798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.146085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.146131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.146334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.146366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.146554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.146586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.146945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.146978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.147175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.147209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.147413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.147445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 
00:36:25.663 [2024-12-16 16:42:14.147626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.147658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.147873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.147904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.148184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.148217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.148423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.148454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.148565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.148598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.148869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.148901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.149158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.149191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.149413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.149452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.149702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.149734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.149986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.150018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 
00:36:25.663 [2024-12-16 16:42:14.150272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.150306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.663 [2024-12-16 16:42:14.150528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.663 [2024-12-16 16:42:14.150560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.663 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.150761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.150794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.150947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.150979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.151183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.151216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.151360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.151393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.151620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.151652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.151780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.151812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.151955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.151987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.152118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.152150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 
00:36:25.664 [2024-12-16 16:42:14.152343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.152375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.152502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.152534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.152728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.152760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.152959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.152991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.153290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.153323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.153452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.153486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.153631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.153662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.153795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.153827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.153959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.153991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.154141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.154175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 
00:36:25.664 [2024-12-16 16:42:14.154287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.154318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.154438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.154470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.154663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.154695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.154837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.154868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.155067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.155112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.155236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.155268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.155379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.155411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.155612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.155644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.155913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.155946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.156088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.156140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 
00:36:25.664 [2024-12-16 16:42:14.156280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.156311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.156431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.156462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.156674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.156705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.156910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.156941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.157128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.157161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.157282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.157314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.157495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.157527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.157671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.157708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.664 [2024-12-16 16:42:14.157860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.664 [2024-12-16 16:42:14.157893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.664 qpair failed and we were unable to recover it. 00:36:25.665 [2024-12-16 16:42:14.158016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.665 [2024-12-16 16:42:14.158048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.665 qpair failed and we were unable to recover it. 
00:36:25.665 [2024-12-16 16:42:14.158256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.158288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.158418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.158450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.158649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.158680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.158803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.158834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.159035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.159066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.159438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.159512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.159740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.159776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.159909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.159943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.160055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.160087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.160357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.160390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.160575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.160607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.160816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.160848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.160968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.161001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.161196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.161230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.161382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.161414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.161600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.161633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.161764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.161796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.161905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.161937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.162052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.162084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.162303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.162342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.162475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.162507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.162729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.162761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.162944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.162976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.163174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.163206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.163524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.163562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.163703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.163735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.163870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.163901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.164153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.164186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.164324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.164356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.164568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.665 [2024-12-16 16:42:14.164601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.665 qpair failed and we were unable to recover it.
00:36:25.665 [2024-12-16 16:42:14.164731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.164763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.164959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.164991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.165124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.165158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.165342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.165372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.165501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.165533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.165661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.165694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.165888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.165920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.166032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.166064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.166338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.166372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.166508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.166539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.166716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.166748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.166936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.166971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.167175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.167209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.167348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.167379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.167518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.167549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.167670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.167702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.167828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.167859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.168039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.168070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.168220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.168253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.168439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.168471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.168577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.168608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.168726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.168765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.168953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.168985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.169091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.169139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.169336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.169368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.169544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.169576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.169777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.169810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.169955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.169987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.170173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.170207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.170351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.170382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.170489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.170521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.170790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.170820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.170946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.170978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.171112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.171145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.171399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.171431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.171566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.171601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.171806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.171837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.171941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.171972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.172089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.172132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.666 qpair failed and we were unable to recover it.
00:36:25.666 [2024-12-16 16:42:14.172314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.666 [2024-12-16 16:42:14.172345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.172457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.172489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.172651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.172682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.172954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.172986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.173135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.173168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.173303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.173335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.173446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.173477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.173726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.173758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.173891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.173922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.174170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.174216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.174459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.174489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.174616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.174648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.174772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.174803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.174913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.174944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.175065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.175107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.175292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.175325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.175525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.175556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.175775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.175806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.175995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.176029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.176055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:36:25.667 [2024-12-16 16:42:14.176121] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:25.667 [2024-12-16 16:42:14.176180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.176214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.176335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.176365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.176566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.176614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.176744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.176775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.176880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.176910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.177117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.177150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.177333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.177366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.177541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.177573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.177683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.177716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.177899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.177933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.178046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.178079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.178423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.178457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.178583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.178617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.178795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.178830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.178964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.178999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.179117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.179153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.179281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.667 [2024-12-16 16:42:14.179316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.667 qpair failed and we were unable to recover it.
00:36:25.667 [2024-12-16 16:42:14.179423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.179456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.179625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.179658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.179779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.179814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.179931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.179963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.180115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.180150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.180290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.180325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.180442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.180475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.180592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.180625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.180739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.180773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.181045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.181078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.181214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.181247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.181361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.181393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.181584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.181622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.181819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.181853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.182040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.182071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.182293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.182325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.182436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.182468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.182609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.182640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.182831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.182863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.183039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.183069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.183191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.183223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.183336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.183367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.183630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.183662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.183774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.183804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.183929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.183960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.184139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.184174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.184309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.184340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.184469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.184500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.184608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.184641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.184848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.184880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.184996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.185027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.185159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.185193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.185389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.185422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.185603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.185635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.185881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.185913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.186020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.186052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.186198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.186232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.186353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.186385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.668 [2024-12-16 16:42:14.186523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.668 [2024-12-16 16:42:14.186555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.668 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.186741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.186774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.186898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.186929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.187113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.187146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.187260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.187293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.187410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.187443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.187546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.187577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.187704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.187736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.187919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.187950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.188073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.188124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.188263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.188295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.188480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.188513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.188759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.188790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.188901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.188933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.189049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.189087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.189294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.189328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.189539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.189572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.189696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.189727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.189974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.190131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.190270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.190413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.190627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.190772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.190938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.190970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.191091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.669 [2024-12-16 16:42:14.191135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.669 qpair failed and we were unable to recover it.
00:36:25.669 [2024-12-16 16:42:14.191341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.191374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.191560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.191591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.191729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.191762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.191882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.191913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.192033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.192066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.192302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.192374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.192632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.192706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.192916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.192952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.193140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.193176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.193430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.193464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 
00:36:25.669 [2024-12-16 16:42:14.193719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.193751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.193941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.193973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.194158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-12-16 16:42:14.194192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.669 qpair failed and we were unable to recover it. 00:36:25.669 [2024-12-16 16:42:14.194381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.194414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.194559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.194592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.194715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.194747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.194924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.194956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.195080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.195122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.195302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.195334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.195456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.195487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 
00:36:25.670 [2024-12-16 16:42:14.195617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.195649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.195752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.195783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.195990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.196022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.196155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.196189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.196370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.196401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.196556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.196589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.196784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.196815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.197062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.197116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.197230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.197268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.197388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.197420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 
00:36:25.670 [2024-12-16 16:42:14.197566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.197597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.197717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.197750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.197860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.197892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.197999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.198032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.198228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.198262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.198457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.198490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.198735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.198767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.199014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.199046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.199172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.199206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.199338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.199370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 
00:36:25.670 [2024-12-16 16:42:14.199480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.199512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.199635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.199667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.199858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.199892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.200013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.200044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.200182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.200216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.670 qpair failed and we were unable to recover it. 00:36:25.670 [2024-12-16 16:42:14.200327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-12-16 16:42:14.200359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.200469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.200501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.200617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.200649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.200754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.200792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.200993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.201025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 
00:36:25.671 [2024-12-16 16:42:14.201131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.201165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.201293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.201325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.201498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.201529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.201718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.201750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.201862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.201893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.202047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.202079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.202214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.202247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.202452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.202485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.202606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.202636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.202746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.202778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 
00:36:25.671 [2024-12-16 16:42:14.202893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.202924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.203042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.203074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.203263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.203295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.203423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.203454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.203563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.203595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.203747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.203779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.203886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.203918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.204028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.204059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.204205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.204243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.204430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.204462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 
00:36:25.671 [2024-12-16 16:42:14.204587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.204619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.204736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.204769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.204945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.204976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.205083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.205134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.205266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.205298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.205407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.205440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.205614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.205646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.205777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.205808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.206109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.206144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.206344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.206377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 
00:36:25.671 [2024-12-16 16:42:14.206500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.206532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.206712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.206746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.206862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.206894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.671 [2024-12-16 16:42:14.207066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.671 [2024-12-16 16:42:14.207108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.671 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.207228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.207260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.207398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.207429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.207558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.207590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.207783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.207814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.208013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.208045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.208174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.208208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 
00:36:25.672 [2024-12-16 16:42:14.208322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.208355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.208605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.208636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.208753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.208785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.208913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.208943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.209189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.209222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.209272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa43c70 (9): Bad file descriptor 00:36:25.672 [2024-12-16 16:42:14.209441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.209500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.209625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.209668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.209793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.209825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.209999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.210030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 
00:36:25.672 [2024-12-16 16:42:14.210134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.210168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.210434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.210466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.210570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.210602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.210727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.210759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.210940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.210972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.211159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.211193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.211310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.211341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.211481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.211513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.211636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.211667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.211789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.211821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 
00:36:25.672 [2024-12-16 16:42:14.212010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.212041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.212173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.212206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.212331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.212364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.212584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.212617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.212755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.212787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.212914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.212946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.213192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.213226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.213420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.213451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.213577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.213608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.213751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.213783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 
00:36:25.672 [2024-12-16 16:42:14.213907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.213938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.214073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.214119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.214237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.672 [2024-12-16 16:42:14.214275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.672 qpair failed and we were unable to recover it. 00:36:25.672 [2024-12-16 16:42:14.214387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.214423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.214543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.214574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.214681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.214712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.214837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.214869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.215117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.215151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.215330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.215362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.215553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.215585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 
00:36:25.673 [2024-12-16 16:42:14.215691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.215723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.215964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.215995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.216253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.216287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.216408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.216440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.216590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.216622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.216808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.216838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.216949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.216982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.217114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.217147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.217277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.217308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.217433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.217465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 
00:36:25.673 [2024-12-16 16:42:14.217642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.217672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.217865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.217897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.218020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.218051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.218301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.218335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.218509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.218541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.218644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.218675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.218828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.218859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.219043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.219074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.219196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.219228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.219406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.219437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 
00:36:25.673 [2024-12-16 16:42:14.219620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.219651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.219832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.219862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.219988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.220019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.220155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.220190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.220375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.220407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.220598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.673 [2024-12-16 16:42:14.220630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.673 qpair failed and we were unable to recover it. 00:36:25.673 [2024-12-16 16:42:14.220813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.220845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.221021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.221053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.221174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.221206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.221323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.221355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 
00:36:25.674 [2024-12-16 16:42:14.221532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.221563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.221754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.221786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.221919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.221957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.222118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.222298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.222329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.222437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.222468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.222705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.222737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.222923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.222953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.223062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.223104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.223218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.223249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 
00:36:25.674 [2024-12-16 16:42:14.223360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.223392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.223521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.223552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.223724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.223755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.223936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.223966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.224160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.224194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.224345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.224376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.224555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.224587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.224704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.224735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.224838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.224871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 00:36:25.674 [2024-12-16 16:42:14.224979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.674 [2024-12-16 16:42:14.225010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.674 qpair failed and we were unable to recover it. 
00:36:25.950 [2024-12-16 16:42:14.225127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.950 [2024-12-16 16:42:14.225159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.950 qpair failed and we were unable to recover it. 00:36:25.950 [2024-12-16 16:42:14.225357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.950 [2024-12-16 16:42:14.225389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.950 qpair failed and we were unable to recover it. 00:36:25.950 [2024-12-16 16:42:14.225516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.950 [2024-12-16 16:42:14.225547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.950 qpair failed and we were unable to recover it. 00:36:25.950 [2024-12-16 16:42:14.225750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.950 [2024-12-16 16:42:14.225782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.950 qpair failed and we were unable to recover it. 00:36:25.950 [2024-12-16 16:42:14.225951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.950 [2024-12-16 16:42:14.225982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.950 qpair failed and we were unable to recover it. 00:36:25.950 [2024-12-16 16:42:14.226106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.226138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.226310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.226340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.226452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.226483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.226590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.226640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.226909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.226941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 
00:36:25.951 [2024-12-16 16:42:14.227114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.227147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.227249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.227280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.227419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.227450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.227572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.227603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.227730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.227762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.227865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.227896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.228034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.228066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.228294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.228337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.228475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.228509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.228703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.228735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 
00:36:25.951 [2024-12-16 16:42:14.228840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.228871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.229049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.229082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.229310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.229352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.229545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.229576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.229769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.229801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.229922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.229953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.230076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.230120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.230301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.230333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.230448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.230479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.230587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.230619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 
00:36:25.951 [2024-12-16 16:42:14.230814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.230846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.230968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.230999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.231116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.231350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.231382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.231558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.231589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.231776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.231808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.232030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.232062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.232348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.232381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.232578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.232608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.232795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.232826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 
00:36:25.951 [2024-12-16 16:42:14.233065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.233105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.233327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.233358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.233498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.233529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.951 qpair failed and we were unable to recover it. 00:36:25.951 [2024-12-16 16:42:14.233748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.951 [2024-12-16 16:42:14.233779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.234043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.234075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.234235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.234269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.234442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.234474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.234666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.234699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.234832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.234863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.235064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.235107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 
00:36:25.952 [2024-12-16 16:42:14.235314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.235347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.235465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.235496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.235626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.235657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.235934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.235966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.236149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.236182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.236364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.236396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.236621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.236671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.236870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.236902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.237146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.237178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.237372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.237404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 
00:36:25.952 [2024-12-16 16:42:14.237555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.237587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.237829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.237861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.238104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.238144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.238325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.238356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.238622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.238655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.238859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.238890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.239134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.239167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.239418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.239450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.239577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.239608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.239817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.239849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 
00:36:25.952 [2024-12-16 16:42:14.240038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.240070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.240266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.240298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.240472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.240503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.240774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.240805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.240946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.240978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.241225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.241258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.241436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.241468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.241613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.241645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.241892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.241923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.242124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.242157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 
00:36:25.952 [2024-12-16 16:42:14.242350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.242382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.242620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.242653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.952 qpair failed and we were unable to recover it. 00:36:25.952 [2024-12-16 16:42:14.242856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.952 [2024-12-16 16:42:14.242887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.243068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.243106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.243348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.243380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.243528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.243560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.243811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.243841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.244050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.244082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.244214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.244245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.244489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.244521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 
00:36:25.953 [2024-12-16 16:42:14.244812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.244845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.245082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.245136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.245281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.245313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.245559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.245591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.245769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.245801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.246084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.246127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.246324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.246356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.246478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.246509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.246724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.246755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.246885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.246917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 
00:36:25.953 [2024-12-16 16:42:14.247151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.247185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.247394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.247425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.247683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.247720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.247910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.247942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.248221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.248254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.248531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.248562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.248841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.248874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.249044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.249075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.249187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.249220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.249407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.249439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 
00:36:25.953 [2024-12-16 16:42:14.249726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.249758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.249992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.250023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.250276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.250308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.250480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.250512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.250707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.250739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.251007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.251038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.251187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.251220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.251482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.251513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.251799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.251830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.252118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.252151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 
00:36:25.953 [2024-12-16 16:42:14.252360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.252392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.953 qpair failed and we were unable to recover it. 00:36:25.953 [2024-12-16 16:42:14.252604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.953 [2024-12-16 16:42:14.252635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.252816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.252848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.253085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.253161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.253340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.253371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.253628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.253659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.253926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.253958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.254213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.254246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.254434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.254465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.254697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.254758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 
00:36:25.954 [2024-12-16 16:42:14.255045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.255079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.255230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.255263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.255442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.255473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.255732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.255764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.255956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.255988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.256211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.256244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.256476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.256508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.256693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.256724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.256922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.256954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.257213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.257246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 
00:36:25.954 [2024-12-16 16:42:14.257434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.257465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.257655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.257688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.257811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.257851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.258115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.258148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.258356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.258389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.258517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.258549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.258813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.258844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.259045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.259077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.259377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.259408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 00:36:25.954 [2024-12-16 16:42:14.259595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.954 [2024-12-16 16:42:14.259627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.954 qpair failed and we were unable to recover it. 
00:36:25.955 [2024-12-16 16:42:14.262583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:25.955 [2024-12-16 16:42:14.267817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.955 [2024-12-16 16:42:14.267856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.955 qpair failed and we were unable to recover it.
00:36:25.956 [2024-12-16 16:42:14.272285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.956 [2024-12-16 16:42:14.272328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.956 qpair failed and we were unable to recover it.
00:36:25.956 [2024-12-16 16:42:14.277126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.956 [2024-12-16 16:42:14.277168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.956 qpair failed and we were unable to recover it.
00:36:25.957 [2024-12-16 16:42:14.285371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:25.957 [2024-12-16 16:42:14.285405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:25.957 [2024-12-16 16:42:14.285412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:25.957 [2024-12-16 16:42:14.285418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:25.957 [2024-12-16 16:42:14.285424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
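The app_setup_trace notices are the target's own how-to for tracing this run: while the target is up, 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace', if this is the only SPDK application running) snapshots the events selected by the 0xFFFF tracepoint group mask, and the backing file /dev/shm/nvmf_trace.0 can simply be copied off the machine for offline analysis after the process exits. Nothing here is an addition beyond restating those notices.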
00:36:25.957 [2024-12-16 16:42:14.286969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:25.957 [2024-12-16 16:42:14.287078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:25.957 [2024-12-16 16:42:14.287120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:25.958 [2024-12-16 16:42:14.287120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:25.958 [2024-12-16 16:42:14.287455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.958 [2024-12-16 16:42:14.287508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.958 qpair failed and we were unable to recover it.
00:36:25.958 [2024-12-16 16:42:14.287820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.958 [2024-12-16 16:42:14.287870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:25.958 qpair failed and we were unable to recover it.
00:36:25.958 [2024-12-16 16:42:14.288066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.958 [2024-12-16 16:42:14.288111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.958 qpair failed and we were unable to recover it.
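The reactor_run notices show the event framework bringing up one polling thread per core in the 4-core set it reported at startup (cores 4-7 on this node). As a rough sketch of that general mechanism, one dedicated event-loop thread pinned to each core, here is a plain pthreads version. This is an illustration only, not SPDK's reactor implementation; the core numbers are the ones from this log, and it assumes the machine actually has CPUs 4-7:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* One event-loop ("reactor") thread per core, pinned before it starts. */
static void *reactor_run(void *arg)
{
    printf("Reactor started on core %ld\n", (long)arg);
    /* A real reactor would now poll its registered pollers in a loop. */
    return NULL;
}

int main(void)
{
    long cores[] = { 4, 5, 6, 7 };  /* core set taken from this log */
    pthread_t t[4];

    for (int i = 0; i < 4; i++) {
        cpu_set_t set;
        pthread_attr_t attr;

        CPU_ZERO(&set);
        CPU_SET(cores[i], &set);
        pthread_attr_init(&attr);
        /* Pin the thread to its core via the attribute, so it never
         * runs anywhere else (fails if the CPU does not exist). */
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
        pthread_create(&t[i], &attr, reactor_run, (void *)cores[i]);
        pthread_attr_destroy(&attr);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}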
00:36:25.959 [2024-12-16 16:42:14.303635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.959 [2024-12-16 16:42:14.303667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.959 qpair failed and we were unable to recover it. 00:36:25.959 [2024-12-16 16:42:14.303923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.959 [2024-12-16 16:42:14.303956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.959 qpair failed and we were unable to recover it. 00:36:25.959 [2024-12-16 16:42:14.304152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.304186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.304463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.304512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.304736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.304772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.304967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.305000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.305197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.305232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.305433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.305468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.305595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.305628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.305922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.305959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 
00:36:25.960 [2024-12-16 16:42:14.306144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.306178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.306363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.306397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.306534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.306568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.306833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.306866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.307078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.307120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.307300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.307332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.307465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.307508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.307705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.307738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.307922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.307955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.308138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.308171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 
00:36:25.960 [2024-12-16 16:42:14.308358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.308390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.308652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.308687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.308857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.308889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.309080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.309120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.309314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.309347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.309475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.309507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.309638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.309672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.309866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.309900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.310027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.310059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.310201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.310235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 
00:36:25.960 [2024-12-16 16:42:14.310393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.310429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.310541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.310574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.310800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.310834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.310949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.310982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.311188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.311222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.311368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.311400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.311542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.311575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.311751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.311783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.312009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.312042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.312294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.312329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 
00:36:25.960 [2024-12-16 16:42:14.312460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.312492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.960 qpair failed and we were unable to recover it. 00:36:25.960 [2024-12-16 16:42:14.312661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.960 [2024-12-16 16:42:14.312694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.312968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.313001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.313253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.313312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.313596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.313644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.313946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.313979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.314258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.314294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.314422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.314454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.314597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.314627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.314766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.314796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 
00:36:25.961 [2024-12-16 16:42:14.314913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.314943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.315149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.315184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.315320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.315350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.315599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.315631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.315820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.315851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.316133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.316167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.316303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.316334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.316523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.316555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.316892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.316923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.317038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.317070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 
00:36:25.961 [2024-12-16 16:42:14.317340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.317372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.317559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.317590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.317775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.317807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.318048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.318079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.318231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.318262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.318440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.318472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.318663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.318693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.318925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.318956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.319239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.319274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.319512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.319543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 
00:36:25.961 [2024-12-16 16:42:14.319676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.319715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.319891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.319921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.320180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.320213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.320429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.320460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.320628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.320659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.320933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.320964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.321241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.321273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.321559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.321590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.321809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.321841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.322010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.322040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 
00:36:25.961 [2024-12-16 16:42:14.322315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.961 [2024-12-16 16:42:14.322348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.961 qpair failed and we were unable to recover it. 00:36:25.961 [2024-12-16 16:42:14.322473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.322503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.322694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.322725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.322987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.323020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.323225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.323259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.323449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.323481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.323673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.323704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.323835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.323866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.324054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.324086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.324365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.324398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 
00:36:25.962 [2024-12-16 16:42:14.324681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.324713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.324915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.324946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.325059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.325091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.325272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.325302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.325487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.325518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.325787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.325817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.326029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.326060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.326259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.326298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.326569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.326600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.326891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.326922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 
00:36:25.962 [2024-12-16 16:42:14.327117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.327150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.327424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.327456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.327596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.327627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.327814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.327846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.328114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.328148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.328330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.328362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.328506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.328538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.328787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.328819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.329053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.329085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.329332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.329366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 
00:36:25.962 [2024-12-16 16:42:14.329471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.329503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.329754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.329802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.330021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.330053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.330277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.330309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.330450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.330482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.330802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.330834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.331017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.331048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.331193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.331226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.331422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.331454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.962 [2024-12-16 16:42:14.331633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.331665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 
00:36:25.962 [2024-12-16 16:42:14.331798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.962 [2024-12-16 16:42:14.331829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.962 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.332089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.332130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.332244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.332275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.332561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.332593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.332883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.332922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.333110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.333144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.333402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.333434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.333724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.333756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.334030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.334063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.334276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.334316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 
00:36:25.963 [2024-12-16 16:42:14.334583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.334614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.334808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.334840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.334975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.335006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.335177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.335210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.335428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.335459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.335726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.335758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.336017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.336048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.336286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.336319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.336515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.336547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.336691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.336722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 
00:36:25.963 [2024-12-16 16:42:14.336921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.336953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.337168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.337201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.337440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.337475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.337591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.337623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.337806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.337839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.338026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.338058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.338261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.338294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.338558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.338590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.338848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.338878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.339071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.339125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 
00:36:25.963 [2024-12-16 16:42:14.339317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.339348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.339587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.339743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.339774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.340068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.340113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.340298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.340328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.340501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.340532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.340757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.340788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.341005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.341037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.341294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.341327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 00:36:25.963 [2024-12-16 16:42:14.341544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.341575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.963 qpair failed and we were unable to recover it. 
00:36:25.963 [2024-12-16 16:42:14.341761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.963 [2024-12-16 16:42:14.341792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.342029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.342061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.342356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.342403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.342681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.342713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.343007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.343039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.343206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.343243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.343440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.343471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.343732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.343763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.344009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.344041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.344256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.344289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 
00:36:25.964 [2024-12-16 16:42:14.344477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.344509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.344771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.344803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.345105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.345137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.345338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.345371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.345571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.345603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.345799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.345830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.346082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.346125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.346320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.346351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.346598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.346630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.346762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.346793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 
00:36:25.964 [2024-12-16 16:42:14.346964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.346995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.347165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.347197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.347391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.347423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.347634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.347664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.347835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.347865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.348068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.348106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.348308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.348338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.348577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.348608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.348862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.348895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.349009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.349039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 
00:36:25.964 [2024-12-16 16:42:14.349245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.349278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.349422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.349453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.349633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.349674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.349955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.349988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.350165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.350198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.964 qpair failed and we were unable to recover it. 00:36:25.964 [2024-12-16 16:42:14.350313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.964 [2024-12-16 16:42:14.350343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.350594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.350625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.350822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.350852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.351132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.351165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.351338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.351368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 
00:36:25.965 [2024-12-16 16:42:14.351563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.351594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.351811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.351842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.352042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.352073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.352320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.352350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.352524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.352555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.352791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.352821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.352996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.353027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.353212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.353244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.353411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.353442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.353616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.353646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 
00:36:25.965 [2024-12-16 16:42:14.353955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.353985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.354236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.354267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.354450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.354480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.354726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.354758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.355064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.355105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.355296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.355326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.355538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.355569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.355828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.355858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.355969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.356000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.356249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.356282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 
00:36:25.965 [2024-12-16 16:42:14.356499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.356531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.356668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.356698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.356955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.356986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.357190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.357223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.357410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.357441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.357650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.357681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.357960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.357990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.358249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.358282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.358392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.358422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.358617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.358649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 
00:36:25.965 [2024-12-16 16:42:14.358832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.358862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.359105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.359138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.359320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.359351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.359558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.359609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.359799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.965 [2024-12-16 16:42:14.359830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.965 qpair failed and we were unable to recover it. 00:36:25.965 [2024-12-16 16:42:14.360016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.360047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.360325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.360359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.360552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.360583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.360864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.360896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.361112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.361144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 
00:36:25.966 [2024-12-16 16:42:14.361386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.361418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.361557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.361588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.361788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.361821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.362058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.362090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.362315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.362347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.362485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.362517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.362756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.362796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.362968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.362999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.363182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.363215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.363406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.363437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 
00:36:25.966 [2024-12-16 16:42:14.363646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.363678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.363853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.363884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.364122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.364154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.364357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.364389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.364515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.364546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.364680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.364711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.364927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.364959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.365180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.365212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.365331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.365363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.365548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.365578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 
00:36:25.966 [2024-12-16 16:42:14.365753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.365784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.366056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.366087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.366374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.366407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.366544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.366575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.366715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.366747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.366951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.366983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.367171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.367205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.367336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.367368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.367490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.367522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.367691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.367722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 
00:36:25.966 [2024-12-16 16:42:14.367958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.367990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.368189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.368221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.368410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.368442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.368653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.368695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.966 qpair failed and we were unable to recover it. 00:36:25.966 [2024-12-16 16:42:14.368946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.966 [2024-12-16 16:42:14.368978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.369258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.369293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.369570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.369601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.369873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.369903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.370075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.370117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.370357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.370388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 
00:36:25.967 [2024-12-16 16:42:14.370596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.370627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.370826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.370857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.371039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.371071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.371339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.371370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.371607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.371638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.371866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.371900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.372158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.372193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.372370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.372401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.372642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.372673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.372937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.372968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 
00:36:25.967 [2024-12-16 16:42:14.373157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.373190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.373329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.373358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.373496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.373526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.373807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.373838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.374105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.374137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.374319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.374349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.374524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.374556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.967 [2024-12-16 16:42:14.374696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.374729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:25.967 [2024-12-16 16:42:14.374969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.375006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 
00:36:25.967 [2024-12-16 16:42:14.375192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.375223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:25.967 [2024-12-16 16:42:14.375433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.375466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.375649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.375681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.375894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.375926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:25.967 [2024-12-16 16:42:14.376193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.376225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.376503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.376533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.376710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.376741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.376876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.376906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.967 qpair failed and we were unable to recover it. 00:36:25.967 [2024-12-16 16:42:14.377122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.967 [2024-12-16 16:42:14.377156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.968 qpair failed and we were unable to recover it. 
00:36:25.968 [2024-12-16 16:42:14.377345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.968 [2024-12-16 16:42:14.377375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.968 qpair failed and we were unable to recover it.
00:36:25.968 [... the posix_sock_create/nvme_tcp_qpair_connect_sock error pair above repeats 2 more times for tqpair=0x7fb4b4000b90 (through 16:42:14.377825) and then 67 times for tqpair=0xa35cd0 (16:42:14.378110 through 16:42:14.391993), always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
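For context: errno 111 is ECONNREFUSED. The host-side connect() reaches 10.0.0.2, but nothing is accepting on port 4420, which is exactly what a target-disconnect test produces while the NVMe/TCP listener is down, so the initiator keeps retrying and logging each refused attempt. A minimal sketch of the same failure from a shell, assuming bash (the /dev/tcp redirection is a bash feature) and no local listener on the probed port:

    # Probe a port with no listener; the kernel answers the SYN with RST and
    # connect() fails with ECONNREFUSED (errno 111), as in the lines above.
    ( exec 3<>/dev/tcp/127.0.0.1/4420 ) 2>/dev/null \
        && echo "connected (something is listening)" \
        || echo "connect() refused (ECONNREFUSED, errno 111)"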
00:36:25.970 [2024-12-16 16:42:14.392230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.970 [2024-12-16 16:42:14.392262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa35cd0 with addr=10.0.0.2, port=4420
00:36:25.970 qpair failed and we were unable to recover it.
00:36:25.970 [... the same error pair then repeats 39 times for tqpair=0x7fb4b0000b90 (16:42:14.392468 through 16:42:14.400812), addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:36:25.971 [2024-12-16 16:42:14.401116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.971 [2024-12-16 16:42:14.401149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b0000b90 with addr=10.0.0.2, port=4420
00:36:25.971 qpair failed and we were unable to recover it.
00:36:25.971 [... the same error pair then repeats 48 times for tqpair=0x7fb4b4000b90 (16:42:14.401354 through 16:42:14.411597), addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:36:25.972 [2024-12-16 16:42:14.411820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.972 [2024-12-16 16:42:14.411851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.972 qpair failed and we were unable to recover it.
00:36:25.972 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:25.972 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:25.972 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.972 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:25.972 [... interleaved with the xtrace lines above, the same connect()/tqpair error pair repeats 7 more times for tqpair=0x7fb4b4000b90 (16:42:14.412068 through 16:42:14.413515), addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
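Two of the xtrace lines above are worth unpacking. The trap registers the suite's cleanup (process_shm to capture the app's shared memory, then nvmftestfini) so it runs on SIGINT, SIGTERM, or any exit path, and the rpc_cmd creates the 64 MB, 512-byte-block malloc bdev (Malloc0) that backs the test subsystem. A minimal sketch of the same trap pattern, with a placeholder cleanup body standing in for the suite's own helpers:

    # Run cleanup on interrupt, termination, or any exit path.
    cleanup() {
        echo "capture shm, stop target, free resources"   # placeholder for process_shm/nvmftestfini
    }
    trap 'cleanup' SIGINT SIGTERM EXIT

Outside the test harness, the same bdev would typically be created against a running SPDK target with the stock RPC client (path assumes an SPDK checkout and the default RPC socket):

    # 64 MB malloc bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0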
00:36:25.972 [2024-12-16 16:42:14.413626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.972 [2024-12-16 16:42:14.413658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.972 qpair failed and we were unable to recover it.
00:36:25.973 [... the same error pair repeats 39 more times for tqpair=0x7fb4b4000b90 (16:42:14.413924 through 16:42:14.422450), addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:36:25.973 [2024-12-16 16:42:14.422591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.973 [2024-12-16 16:42:14.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.973 qpair failed and we were unable to recover it. 00:36:25.973 [2024-12-16 16:42:14.422837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.973 [2024-12-16 16:42:14.422867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.973 qpair failed and we were unable to recover it. 00:36:25.973 [2024-12-16 16:42:14.423178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.973 [2024-12-16 16:42:14.423211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.973 qpair failed and we were unable to recover it. 00:36:25.973 [2024-12-16 16:42:14.423406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.973 [2024-12-16 16:42:14.423438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.973 qpair failed and we were unable to recover it. 00:36:25.973 [2024-12-16 16:42:14.423571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.973 [2024-12-16 16:42:14.423602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.973 qpair failed and we were unable to recover it. 00:36:25.973 [2024-12-16 16:42:14.423897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.423928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.424108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.424140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.424327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.424358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.424645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.424677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.424971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.425001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 
00:36:25.974 [2024-12-16 16:42:14.425208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.425239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.425381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.425411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.425600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.425631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.425747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.425778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.425913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.425943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.426238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.426270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.426463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.426494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.426685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.426716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.426954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.426985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.427109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.427141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 
00:36:25.974 [2024-12-16 16:42:14.427284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.427315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.427604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.427636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.427849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.427885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.428130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.428163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.428352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.428383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.428512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.428543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.428715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.428745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.428970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.429001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.429196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.429227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.429527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.429558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 
00:36:25.974 [2024-12-16 16:42:14.429855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.429886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.430054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.430084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.430286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.430317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.430507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.430538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.430721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.430752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.430869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.430900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.431049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.431081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.431232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.431264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.431433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.431463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.431604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.431636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 
00:36:25.974 [2024-12-16 16:42:14.431835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.431866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.432058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.432088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.432267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.974 [2024-12-16 16:42:14.432298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.974 qpair failed and we were unable to recover it. 00:36:25.974 [2024-12-16 16:42:14.432485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.432516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.432654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.432684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.432943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.432974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.433255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.433288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.433499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.433531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.433673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.433703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.433920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.433952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 
00:36:25.975 [2024-12-16 16:42:14.434235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.434267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.434374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.434405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.434645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.434676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.434919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.434950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.435156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.435188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.435295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.435326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.435462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.435493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.435618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.435648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.435828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.435858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.436063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.436103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 
00:36:25.975 [2024-12-16 16:42:14.436311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.436343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.436491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.436522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.436733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.436769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.437013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.437045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.437280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.437312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.437442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.437474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.437645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.437676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.437882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.437913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.438106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.438138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.438329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.438360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 
00:36:25.975 [2024-12-16 16:42:14.438484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.438514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.438617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.438649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.438911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.438941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.439120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.439153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.439336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.439366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.439601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.439632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.439839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.439870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.440038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.440069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.440274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.440305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.440495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.440527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 
00:36:25.975 [2024-12-16 16:42:14.440814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.440845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.975 qpair failed and we were unable to recover it. 00:36:25.975 [2024-12-16 16:42:14.441113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.975 [2024-12-16 16:42:14.441145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.441277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.441309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.441491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.441523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.441664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.441694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.441900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.441930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.442146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.442177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.442414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.442449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.442597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.442626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.442837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.442867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 
00:36:25.976 [2024-12-16 16:42:14.442998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.443030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.443249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.443280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.443412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.443443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.443559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.443590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.443873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.443903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.444161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.444193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.444454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.444484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.444817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.444850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.445136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.445168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.445427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.445458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 
00:36:25.976 [2024-12-16 16:42:14.445732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.445764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.445946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.445976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.446172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.446211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.446470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.446501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.446671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.446702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.446984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.447014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.447208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.447241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.447481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.447511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.447699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.447731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.447917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.447948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 
00:36:25.976 [2024-12-16 16:42:14.448212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.448245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.448509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.448540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.448809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.448841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.448959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.448989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.449253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.449286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.449479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.449509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.449701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.976 [2024-12-16 16:42:14.449733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.976 qpair failed and we were unable to recover it. 00:36:25.976 [2024-12-16 16:42:14.449926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.449956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.450156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.450187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.450394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.450424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 
00:36:25.977 [2024-12-16 16:42:14.450525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.450556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.450743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.450777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.451039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.451071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.451351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.451383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.451662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.451695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.451883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.451914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.452110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.452142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.452403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.452435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.452608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.452642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.452820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.452852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 
00:36:25.977 [2024-12-16 16:42:14.453143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.453177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.453463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.453494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.453628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.453660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.453920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.453951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.454170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.454202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.454383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.454414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.454601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.454632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.454896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.454926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.455140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.455172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 00:36:25.977 [2024-12-16 16:42:14.455311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.977 [2024-12-16 16:42:14.455341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420 00:36:25.977 qpair failed and we were unable to recover it. 
00:36:25.977 [2024-12-16 16:42:14.455545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.977 [2024-12-16 16:42:14.455576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.977 qpair failed and we were unable to recover it.
00:36:25.977 Malloc0
00:36:25.977 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.977 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:25.977 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.977 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:25.978 [2024-12-16 16:42:14.459654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.978 [2024-12-16 16:42:14.459685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.978 qpair failed and we were unable to recover it.
00:36:25.978 [2024-12-16 16:42:14.459876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.978 [2024-12-16 16:42:14.459907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.978 qpair failed and we were unable to recover it.
00:36:25.978 [2024-12-16 16:42:14.462385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.978 [2024-12-16 16:42:14.462415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.978 qpair failed and we were unable to recover it.
00:36:25.978 [2024-12-16 16:42:14.463768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.978 [2024-12-16 16:42:14.463785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:25.978 [2024-12-16 16:42:14.463797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.978 qpair failed and we were unable to recover it.
00:36:25.978 [2024-12-16 16:42:14.464562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.978 [2024-12-16 16:42:14.464592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.978 qpair failed and we were unable to recover it.
00:36:25.979 [2024-12-16 16:42:14.471985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.979 [2024-12-16 16:42:14.472016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.979 qpair failed and we were unable to recover it.
00:36:25.979 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.979 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:25.979 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.979 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:25.979 [2024-12-16 16:42:14.473668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.979 [2024-12-16 16:42:14.473699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.979 qpair failed and we were unable to recover it.
00:36:25.979 [2024-12-16 16:42:14.473962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.979 [2024-12-16 16:42:14.473993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.979 qpair failed and we were unable to recover it.
00:36:25.980 [2024-12-16 16:42:14.479067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.980 [2024-12-16 16:42:14.479103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.980 qpair failed and we were unable to recover it.
00:36:25.980 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.980 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:25.980 [2024-12-16 16:42:14.480872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.980 [2024-12-16 16:42:14.480902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4b4000b90 with addr=10.0.0.2, port=4420
00:36:25.980 qpair failed and we were unable to recover it.
00:36:25.980 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.980 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:25.980 [2024-12-16 16:42:14.481497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.980 [2024-12-16 16:42:14.481548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.980 qpair failed and we were unable to recover it.
00:36:25.980 [2024-12-16 16:42:14.484054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.980 [2024-12-16 16:42:14.484085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.980 qpair failed and we were unable to recover it.
00:36:25.981 [2024-12-16 16:42:14.486831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.981 [2024-12-16 16:42:14.486862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.981 qpair failed and we were unable to recover it.
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:25.981 [2024-12-16 16:42:14.491385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.981 [2024-12-16 16:42:14.491415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.981 qpair failed and we were unable to recover it.
00:36:25.981 [2024-12-16 16:42:14.491676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.981 [2024-12-16 16:42:14.491713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.981 qpair failed and we were unable to recover it.
00:36:25.981 [2024-12-16 16:42:14.491897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:25.981 [2024-12-16 16:42:14.491928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4bc000b90 with addr=10.0.0.2, port=4420
00:36:25.981 qpair failed and we were unable to recover it.
00:36:25.981 [2024-12-16 16:42:14.491996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:25.981 [2024-12-16 16:42:14.494465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.981 [2024-12-16 16:42:14.494592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.981 [2024-12-16 16:42:14.494638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.981 [2024-12-16 16:42:14.494661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.981 [2024-12-16 16:42:14.494681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:25.981 [2024-12-16 16:42:14.494730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:25.981 qpair failed and we were unable to recover it.
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:25.981 [2024-12-16 16:42:14.504378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.981 [2024-12-16 16:42:14.504467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.981 [2024-12-16 16:42:14.504506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.981 [2024-12-16 16:42:14.504524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.981 [2024-12-16 16:42:14.504544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:25.981 [2024-12-16 16:42:14.504586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:25.981 qpair failed and we were unable to recover it.
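The rpc_cmd trace lines above (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the target-side bring-up for this test. A minimal sketch of the same sequence via SPDK's rpc.py, with the script path assumed relative to an SPDK checkout and the flags copied verbatim from the trace:
+ scripts/rpc.py nvmf_create_transport -t tcp -o
+ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
+ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
+ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
+ scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Once the subsystem listener is added, the target logs the "NVMe/TCP Target Listening" notice above, and from that point the host's connect() attempts stop being refused; the failure mode shifts to the Fabrics CONNECT errors that follow.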
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.981 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1208524
00:36:25.981 [2024-12-16 16:42:14.514437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.981 [2024-12-16 16:42:14.514540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.981 [2024-12-16 16:42:14.514566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.981 [2024-12-16 16:42:14.514578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.981 [2024-12-16 16:42:14.514589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:25.981 [2024-12-16 16:42:14.514621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:25.981 qpair failed and we were unable to recover it.
00:36:25.981 [2024-12-16 16:42:14.524390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:25.981 [2024-12-16 16:42:14.524455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:25.981 [2024-12-16 16:42:14.524473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:25.981 [2024-12-16 16:42:14.524481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:25.981 [2024-12-16 16:42:14.524489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:25.981 [2024-12-16 16:42:14.524508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:25.981 qpair failed and we were unable to recover it.
00:36:26.241 [2024-12-16 16:42:14.544392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.241 [2024-12-16 16:42:14.544450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.241 [2024-12-16 16:42:14.544463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.241 [2024-12-16 16:42:14.544470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.241 [2024-12-16 16:42:14.544475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.241 [2024-12-16 16:42:14.544489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.241 qpair failed and we were unable to recover it.
00:36:26.242 [2024-12-16 16:42:14.624593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.242 [2024-12-16 16:42:14.624645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.242 [2024-12-16 16:42:14.624659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.242 [2024-12-16 16:42:14.624665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.242 [2024-12-16 16:42:14.624671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.242 [2024-12-16 16:42:14.624686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.242 qpair failed and we were unable to recover it.
00:36:26.242 [2024-12-16 16:42:14.634630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.634697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.634710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.634717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.634722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.634736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 00:36:26.242 [2024-12-16 16:42:14.644645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.644703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.644717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.644723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.644729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.644747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 00:36:26.242 [2024-12-16 16:42:14.654683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.654739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.654753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.654759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.654765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.654779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 
00:36:26.242 [2024-12-16 16:42:14.664706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.664762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.664776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.664782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.664788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.664802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 00:36:26.242 [2024-12-16 16:42:14.674727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.674788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.674802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.674809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.674814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.674828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 00:36:26.242 [2024-12-16 16:42:14.684765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.684823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.684837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.684844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.684850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.684864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 
00:36:26.242 [2024-12-16 16:42:14.694794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.242 [2024-12-16 16:42:14.694850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.242 [2024-12-16 16:42:14.694864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.242 [2024-12-16 16:42:14.694870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.242 [2024-12-16 16:42:14.694876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.242 [2024-12-16 16:42:14.694890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.242 qpair failed and we were unable to recover it. 00:36:26.242 [2024-12-16 16:42:14.704853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.704907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.704920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.704927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.704932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.704946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.714857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.714918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.714932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.714939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.714944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.714959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 
00:36:26.243 [2024-12-16 16:42:14.724902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.724966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.724980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.724986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.724992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.725006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.734903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.734958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.734975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.734982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.734987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.735001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.744956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.745012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.745025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.745032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.745038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.745051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 
00:36:26.243 [2024-12-16 16:42:14.754954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.755025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.755039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.755045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.755051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.755065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.765005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.765063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.765077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.765083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.765089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.765108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.775026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.775086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.775104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.775110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.775121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.775136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 
00:36:26.243 [2024-12-16 16:42:14.785060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.785124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.785138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.785144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.785150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.785165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.795082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.795151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.795165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.795171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.795177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.795191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.805112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.805173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.805187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.805193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.805199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.805213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 
00:36:26.243 [2024-12-16 16:42:14.815154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.815210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.815225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.815231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.815237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.815252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.825208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.825261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.825275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.825281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.243 [2024-12-16 16:42:14.825287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.243 [2024-12-16 16:42:14.825301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.243 qpair failed and we were unable to recover it. 00:36:26.243 [2024-12-16 16:42:14.835205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.243 [2024-12-16 16:42:14.835263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.243 [2024-12-16 16:42:14.835277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.243 [2024-12-16 16:42:14.835284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.244 [2024-12-16 16:42:14.835289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.244 [2024-12-16 16:42:14.835304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.244 qpair failed and we were unable to recover it. 
00:36:26.244 [2024-12-16 16:42:14.845273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.244 [2024-12-16 16:42:14.845330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.244 [2024-12-16 16:42:14.845344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.244 [2024-12-16 16:42:14.845350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.244 [2024-12-16 16:42:14.845355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.244 [2024-12-16 16:42:14.845369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.244 qpair failed and we were unable to recover it. 00:36:26.503 [2024-12-16 16:42:14.855270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.503 [2024-12-16 16:42:14.855329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.503 [2024-12-16 16:42:14.855343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.503 [2024-12-16 16:42:14.855349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.503 [2024-12-16 16:42:14.855355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.503 [2024-12-16 16:42:14.855368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.503 qpair failed and we were unable to recover it. 00:36:26.503 [2024-12-16 16:42:14.865282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.503 [2024-12-16 16:42:14.865337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.503 [2024-12-16 16:42:14.865354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.503 [2024-12-16 16:42:14.865360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.503 [2024-12-16 16:42:14.865366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.503 [2024-12-16 16:42:14.865380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.503 qpair failed and we were unable to recover it. 
00:36:26.503 [2024-12-16 16:42:14.875322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.503 [2024-12-16 16:42:14.875376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.503 [2024-12-16 16:42:14.875390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.503 [2024-12-16 16:42:14.875397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.875403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.875417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.885347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.885401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.885415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.885421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.885427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.885441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.895373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.895426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.895440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.895446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.895452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.895466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 
00:36:26.504 [2024-12-16 16:42:14.905399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.905465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.905479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.905486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.905500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.905515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.915432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.915493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.915508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.915514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.915520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.915534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.925703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.925761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.925775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.925782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.925788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.925803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 
00:36:26.504 [2024-12-16 16:42:14.935478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.935533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.935546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.935553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.935559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.935573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.945497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.945550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.945564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.945570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.945576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.945589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.955562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.955616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.955631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.955637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.955643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.955657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 
00:36:26.504 [2024-12-16 16:42:14.965592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.965661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.965675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.965681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.965687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.965700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.975593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.975655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.975669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.975676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.975681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.975696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:14.985624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.985683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.985697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.985703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.985709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.985724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 
00:36:26.504 [2024-12-16 16:42:14.995671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:14.995754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:14.995769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:14.995776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:14.995782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:14.995795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.504 [2024-12-16 16:42:15.005691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.504 [2024-12-16 16:42:15.005755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.504 [2024-12-16 16:42:15.005770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.504 [2024-12-16 16:42:15.005776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.504 [2024-12-16 16:42:15.005783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.504 [2024-12-16 16:42:15.005797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.504 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.015726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.015779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.015793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.015800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.015806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.015820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 
00:36:26.505 [2024-12-16 16:42:15.025729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.025782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.025796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.025803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.025809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.025822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.035760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.035818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.035832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.035841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.035847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.035861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.045733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.045798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.045812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.045818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.045824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.045838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 
00:36:26.505 [2024-12-16 16:42:15.055815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.055912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.055926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.055932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.055938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.055952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.065850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.065916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.065930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.065936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.065942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.065956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.075857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.075962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.075978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.075984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.075990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.076009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 
00:36:26.505 [2024-12-16 16:42:15.085904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.085966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.085980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.085987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.085993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.086006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.095921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.096007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.096021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.096028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.096034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.096048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 00:36:26.505 [2024-12-16 16:42:15.105948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.505 [2024-12-16 16:42:15.106001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.505 [2024-12-16 16:42:15.106016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.505 [2024-12-16 16:42:15.106022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.505 [2024-12-16 16:42:15.106028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.505 [2024-12-16 16:42:15.106043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.505 qpair failed and we were unable to recover it. 
00:36:26.765 [2024-12-16 16:42:15.115992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.765 [2024-12-16 16:42:15.116046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.765 [2024-12-16 16:42:15.116061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.765 [2024-12-16 16:42:15.116067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.765 [2024-12-16 16:42:15.116074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.765 [2024-12-16 16:42:15.116088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.765 qpair failed and we were unable to recover it. 00:36:26.765 [2024-12-16 16:42:15.126025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.765 [2024-12-16 16:42:15.126083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.765 [2024-12-16 16:42:15.126102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.765 [2024-12-16 16:42:15.126109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.765 [2024-12-16 16:42:15.126114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.765 [2024-12-16 16:42:15.126129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.765 qpair failed and we were unable to recover it. 00:36:26.765 [2024-12-16 16:42:15.136049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.765 [2024-12-16 16:42:15.136107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.765 [2024-12-16 16:42:15.136121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.765 [2024-12-16 16:42:15.136127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.765 [2024-12-16 16:42:15.136133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.765 [2024-12-16 16:42:15.136148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.765 qpair failed and we were unable to recover it. 
00:36:26.765 [2024-12-16 16:42:15.146062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.765 [2024-12-16 16:42:15.146115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.765 [2024-12-16 16:42:15.146129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.765 [2024-12-16 16:42:15.146135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.765 [2024-12-16 16:42:15.146141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.765 [2024-12-16 16:42:15.146155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.765 qpair failed and we were unable to recover it.
00:36:26.765 [2024-12-16 16:42:15.156138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.765 [2024-12-16 16:42:15.156207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.765 [2024-12-16 16:42:15.156221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.765 [2024-12-16 16:42:15.156227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.765 [2024-12-16 16:42:15.156234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.765 [2024-12-16 16:42:15.156248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.765 qpair failed and we were unable to recover it.
00:36:26.765 [2024-12-16 16:42:15.166134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.765 [2024-12-16 16:42:15.166188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.765 [2024-12-16 16:42:15.166205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.765 [2024-12-16 16:42:15.166212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.765 [2024-12-16 16:42:15.166218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.765 [2024-12-16 16:42:15.166232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.765 qpair failed and we were unable to recover it.
00:36:26.765 [2024-12-16 16:42:15.176079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.765 [2024-12-16 16:42:15.176143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.765 [2024-12-16 16:42:15.176157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.765 [2024-12-16 16:42:15.176164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.176170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.176184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.186177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.186279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.186292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.186298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.186304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.186319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.196251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.196311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.196325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.196332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.196338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.196352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.206249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.206306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.206321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.206327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.206333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.206350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.216282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.216341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.216356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.216363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.216369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.216383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.226303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.226353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.226368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.226374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.226380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.226394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.236326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.236383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.236396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.236403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.236409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.236422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.246380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.246438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.246452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.246458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.246464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.246478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.256389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.256443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.256457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.256463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.256469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.256482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.266422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.266479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.266493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.266500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.266505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.266520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.276445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.276497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.276512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.276518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.276524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.276538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.286471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.286529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.286543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.286550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.286556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.286570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.296612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.296679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.296696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.296702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.296708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.296723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.306565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.306622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.306636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.306643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.766 [2024-12-16 16:42:15.306648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.766 [2024-12-16 16:42:15.306663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.766 qpair failed and we were unable to recover it.
00:36:26.766 [2024-12-16 16:42:15.316658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.766 [2024-12-16 16:42:15.316763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.766 [2024-12-16 16:42:15.316777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.766 [2024-12-16 16:42:15.316784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.767 [2024-12-16 16:42:15.316790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.767 [2024-12-16 16:42:15.316804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.767 qpair failed and we were unable to recover it.
00:36:26.767 [2024-12-16 16:42:15.326617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.767 [2024-12-16 16:42:15.326674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.767 [2024-12-16 16:42:15.326688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.767 [2024-12-16 16:42:15.326694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.767 [2024-12-16 16:42:15.326700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.767 [2024-12-16 16:42:15.326713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.767 qpair failed and we were unable to recover it.
00:36:26.767 [2024-12-16 16:42:15.336648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.767 [2024-12-16 16:42:15.336704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.767 [2024-12-16 16:42:15.336718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.767 [2024-12-16 16:42:15.336724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.767 [2024-12-16 16:42:15.336733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.767 [2024-12-16 16:42:15.336747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.767 qpair failed and we were unable to recover it.
00:36:26.767 [2024-12-16 16:42:15.346628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:26.767 [2024-12-16 16:42:15.346680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:26.767 [2024-12-16 16:42:15.346693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:26.767 [2024-12-16 16:42:15.346699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:26.767 [2024-12-16 16:42:15.346705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:26.767 [2024-12-16 16:42:15.346719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:26.767 qpair failed and we were unable to recover it.
00:36:26.767 [2024-12-16 16:42:15.356710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.767 [2024-12-16 16:42:15.356812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.767 [2024-12-16 16:42:15.356826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.767 [2024-12-16 16:42:15.356832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.767 [2024-12-16 16:42:15.356838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.767 [2024-12-16 16:42:15.356851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.767 qpair failed and we were unable to recover it. 00:36:26.767 [2024-12-16 16:42:15.366631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:26.767 [2024-12-16 16:42:15.366688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:26.767 [2024-12-16 16:42:15.366702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:26.767 [2024-12-16 16:42:15.366708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:26.767 [2024-12-16 16:42:15.366714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:26.767 [2024-12-16 16:42:15.366728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:26.767 qpair failed and we were unable to recover it. 00:36:27.027 [2024-12-16 16:42:15.376738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.027 [2024-12-16 16:42:15.376790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.027 [2024-12-16 16:42:15.376804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.027 [2024-12-16 16:42:15.376811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.027 [2024-12-16 16:42:15.376816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.027 [2024-12-16 16:42:15.376831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.027 qpair failed and we were unable to recover it. 
00:36:27.027 [2024-12-16 16:42:15.386794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.386854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.386868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.386874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.386880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.386895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.396776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.396831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.396845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.396852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.396857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.396872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.406814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.406867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.406881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.406887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.406893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.406907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.416836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.416894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.416908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.416915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.416921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.416935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.426867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.426920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.426936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.426943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.426948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.426962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.436857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.436916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.436929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.436936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.436941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.436955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.446922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.446978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.446993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.446999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.447005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.447018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.456968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.457050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.457064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.457071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.457076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.457090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.466985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.467037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.467052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.467062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.467067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.467082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.476983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.477047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.477062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.477068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.477074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.477088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.487050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.487125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.487154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.487165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.487171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.487191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.496989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.497054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.497069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.497075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.497081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.497099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.507099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.507161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.507176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.507182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.507188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.507202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.517135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.517195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.517210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.517217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.517223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.517238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.527152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.027 [2024-12-16 16:42:15.527227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.027 [2024-12-16 16:42:15.527241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.027 [2024-12-16 16:42:15.527248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.027 [2024-12-16 16:42:15.527254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.027 [2024-12-16 16:42:15.527268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.027 qpair failed and we were unable to recover it.
00:36:27.027 [2024-12-16 16:42:15.537119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.537213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.537227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.537233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.537239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.537253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.547197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.547256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.547270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.547276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.547282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.547296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.557291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.557386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.557400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.557406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.557412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.557426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.567197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.567254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.567268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.567275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.567280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.567295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.577237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.577293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.577307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.577313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.577319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.577333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.587315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.587367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.587381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.587388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.587394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.587408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.597380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.597461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.597475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.597485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.597491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.597504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.607388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.607442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.607455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.607462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.607467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.607481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.617334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.617389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.617403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.617409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.617415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.617429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.028 [2024-12-16 16:42:15.627368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.028 [2024-12-16 16:42:15.627422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.028 [2024-12-16 16:42:15.627436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.028 [2024-12-16 16:42:15.627442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.028 [2024-12-16 16:42:15.627448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.028 [2024-12-16 16:42:15.627462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.028 qpair failed and we were unable to recover it.
00:36:27.288 [2024-12-16 16:42:15.637471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.288 [2024-12-16 16:42:15.637528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.288 [2024-12-16 16:42:15.637543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.288 [2024-12-16 16:42:15.637549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.288 [2024-12-16 16:42:15.637555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.288 [2024-12-16 16:42:15.637573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.288 qpair failed and we were unable to recover it.
00:36:27.288 [2024-12-16 16:42:15.647436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.288 [2024-12-16 16:42:15.647494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.288 [2024-12-16 16:42:15.647508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.288 [2024-12-16 16:42:15.647515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.288 [2024-12-16 16:42:15.647520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.288 [2024-12-16 16:42:15.647534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.288 qpair failed and we were unable to recover it.
00:36:27.288 [2024-12-16 16:42:15.657467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.288 [2024-12-16 16:42:15.657526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.288 [2024-12-16 16:42:15.657539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.288 [2024-12-16 16:42:15.657546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.288 [2024-12-16 16:42:15.657552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.288 [2024-12-16 16:42:15.657566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.288 qpair failed and we were unable to recover it.
00:36:27.288 [2024-12-16 16:42:15.667579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.288 [2024-12-16 16:42:15.667632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.288 [2024-12-16 16:42:15.667646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.288 [2024-12-16 16:42:15.667652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.288 [2024-12-16 16:42:15.667658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.288 [2024-12-16 16:42:15.667672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.288 qpair failed and we were unable to recover it.
00:36:27.288 [2024-12-16 16:42:15.677529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.288 [2024-12-16 16:42:15.677585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.288 [2024-12-16 16:42:15.677599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.288 [2024-12-16 16:42:15.677606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.288 [2024-12-16 16:42:15.677612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.288 [2024-12-16 16:42:15.677625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.289 qpair failed and we were unable to recover it.
00:36:27.289 [2024-12-16 16:42:15.687602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.289 [2024-12-16 16:42:15.687664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.289 [2024-12-16 16:42:15.687677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.289 [2024-12-16 16:42:15.687684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.289 [2024-12-16 16:42:15.687690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.289 [2024-12-16 16:42:15.687704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.289 qpair failed and we were unable to recover it.
00:36:27.289 [2024-12-16 16:42:15.697574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.289 [2024-12-16 16:42:15.697623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.289 [2024-12-16 16:42:15.697637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.289 [2024-12-16 16:42:15.697643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.289 [2024-12-16 16:42:15.697649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.289 [2024-12-16 16:42:15.697664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.289 qpair failed and we were unable to recover it.
00:36:27.289 [2024-12-16 16:42:15.707598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:27.289 [2024-12-16 16:42:15.707651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:27.289 [2024-12-16 16:42:15.707665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:27.289 [2024-12-16 16:42:15.707671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:27.289 [2024-12-16 16:42:15.707677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90
00:36:27.289 [2024-12-16 16:42:15.707691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:27.289 qpair failed and we were unable to recover it.
00:36:27.289 [2024-12-16 16:42:15.717691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.717747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.717762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.717768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.717774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.717789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.727651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.727707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.727724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.727731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.727737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.727751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.737733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.737790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.737804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.737810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.737816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.737830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 
00:36:27.289 [2024-12-16 16:42:15.747764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.747818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.747832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.747838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.747844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.747858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.757761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.757818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.757831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.757838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.757844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.757858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.767782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.767839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.767854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.767860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.767866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.767885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 
00:36:27.289 [2024-12-16 16:42:15.777854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.777932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.777945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.777952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.777957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.777972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.787917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.787971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.787986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.787992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.787998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.788012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.797844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.797899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.797913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.797919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.797925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.797939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 
00:36:27.289 [2024-12-16 16:42:15.807873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.807970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.807984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.289 [2024-12-16 16:42:15.807990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.289 [2024-12-16 16:42:15.807996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.289 [2024-12-16 16:42:15.808010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.289 qpair failed and we were unable to recover it. 00:36:27.289 [2024-12-16 16:42:15.817901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.289 [2024-12-16 16:42:15.817956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.289 [2024-12-16 16:42:15.817971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.817977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.817983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.817997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 00:36:27.290 [2024-12-16 16:42:15.828054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.828156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.828170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.828176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.828182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.828196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 
00:36:27.290 [2024-12-16 16:42:15.838049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.838110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.838124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.838130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.838136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.838150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 00:36:27.290 [2024-12-16 16:42:15.847986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.848041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.848054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.848060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.848066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.848080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 00:36:27.290 [2024-12-16 16:42:15.858082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.858147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.858165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.858174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.858180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.858194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 
00:36:27.290 [2024-12-16 16:42:15.868032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.868089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.868108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.868117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.868125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.868142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 00:36:27.290 [2024-12-16 16:42:15.878058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.878115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.878129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.878135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.878141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.878155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 00:36:27.290 [2024-12-16 16:42:15.888164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.290 [2024-12-16 16:42:15.888218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.290 [2024-12-16 16:42:15.888232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.290 [2024-12-16 16:42:15.888239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.290 [2024-12-16 16:42:15.888244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.290 [2024-12-16 16:42:15.888259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.290 qpair failed and we were unable to recover it. 
00:36:27.550 [2024-12-16 16:42:15.898135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.898239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.898253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.898259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.898268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.898283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 00:36:27.550 [2024-12-16 16:42:15.908213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.908264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.908278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.908284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.908290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.908304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 00:36:27.550 [2024-12-16 16:42:15.918282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.918339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.918353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.918360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.918366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.918381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 
00:36:27.550 [2024-12-16 16:42:15.928281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.928340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.928354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.928360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.928366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.928380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 00:36:27.550 [2024-12-16 16:42:15.938313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.938412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.938426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.938432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.938438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.938451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 00:36:27.550 [2024-12-16 16:42:15.948380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.948441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.948455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.948462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.948468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.948482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 
00:36:27.550 [2024-12-16 16:42:15.958363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.958439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.958452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.958458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.958464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.550 [2024-12-16 16:42:15.958478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.550 qpair failed and we were unable to recover it. 00:36:27.550 [2024-12-16 16:42:15.968402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.550 [2024-12-16 16:42:15.968462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.550 [2024-12-16 16:42:15.968476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.550 [2024-12-16 16:42:15.968483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.550 [2024-12-16 16:42:15.968489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:15.968502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:15.978447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:15.978503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:15.978517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:15.978523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:15.978529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:15.978543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 
00:36:27.551 [2024-12-16 16:42:15.988454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:15.988507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:15.988523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:15.988530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:15.988536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:15.988549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:15.998483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:15.998554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:15.998567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:15.998574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:15.998580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:15.998593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.008505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.008570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.008585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.008591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.008597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.008612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 
00:36:27.551 [2024-12-16 16:42:16.018467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.018522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.018536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.018542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.018548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.018563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.028555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.028612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.028626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.028635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.028641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.028656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.038581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.038639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.038652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.038659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.038664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.038678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 
00:36:27.551 [2024-12-16 16:42:16.048615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.048671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.048684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.048691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.048697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.048711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.058623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.058678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.058693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.058699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.058705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.058719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.068672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.068728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.068742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.068749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.068755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.068770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 
00:36:27.551 [2024-12-16 16:42:16.078699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.078775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.078789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.078796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.078801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.078816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.088734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.088792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.088805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.088812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.088817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.088831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 00:36:27.551 [2024-12-16 16:42:16.098754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.551 [2024-12-16 16:42:16.098808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.551 [2024-12-16 16:42:16.098821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.551 [2024-12-16 16:42:16.098828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.551 [2024-12-16 16:42:16.098833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.551 [2024-12-16 16:42:16.098847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.551 qpair failed and we were unable to recover it. 
00:36:27.551 [2024-12-16 16:42:16.108770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.552 [2024-12-16 16:42:16.108819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.552 [2024-12-16 16:42:16.108833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.552 [2024-12-16 16:42:16.108840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.552 [2024-12-16 16:42:16.108846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.552 [2024-12-16 16:42:16.108860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.552 qpair failed and we were unable to recover it. 00:36:27.552 [2024-12-16 16:42:16.118868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.552 [2024-12-16 16:42:16.118975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.552 [2024-12-16 16:42:16.118990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.552 [2024-12-16 16:42:16.118996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.552 [2024-12-16 16:42:16.119002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.552 [2024-12-16 16:42:16.119016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.552 qpair failed and we were unable to recover it. 00:36:27.552 [2024-12-16 16:42:16.128847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.552 [2024-12-16 16:42:16.128903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.552 [2024-12-16 16:42:16.128917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.552 [2024-12-16 16:42:16.128924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.552 [2024-12-16 16:42:16.128929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.552 [2024-12-16 16:42:16.128943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.552 qpair failed and we were unable to recover it. 
00:36:27.552 [2024-12-16 16:42:16.138872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.552 [2024-12-16 16:42:16.138925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.552 [2024-12-16 16:42:16.138939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.552 [2024-12-16 16:42:16.138945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.552 [2024-12-16 16:42:16.138951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.552 [2024-12-16 16:42:16.138965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.552 qpair failed and we were unable to recover it. 00:36:27.552 [2024-12-16 16:42:16.148889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.552 [2024-12-16 16:42:16.148966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.552 [2024-12-16 16:42:16.148980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.552 [2024-12-16 16:42:16.148986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.552 [2024-12-16 16:42:16.148992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.552 [2024-12-16 16:42:16.149005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.552 qpair failed and we were unable to recover it. 00:36:27.812 [2024-12-16 16:42:16.158911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.812 [2024-12-16 16:42:16.158968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.812 [2024-12-16 16:42:16.158982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.812 [2024-12-16 16:42:16.158992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.812 [2024-12-16 16:42:16.158997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.812 [2024-12-16 16:42:16.159011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.812 qpair failed and we were unable to recover it. 
00:36:27.813 [2024-12-16 16:42:16.168964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.169018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.169032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.169039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.169045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.169058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.178997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.179056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.179070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.179076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.179082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.179102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.189010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.189063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.189076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.189082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.189087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.189106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 
00:36:27.813 [2024-12-16 16:42:16.199092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.199200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.199213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.199220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.199226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.199243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.209080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.209145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.209160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.209167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.209173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.209187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.219102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.219203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.219217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.219223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.219229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.219243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 
00:36:27.813 [2024-12-16 16:42:16.229121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.229173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.229187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.229193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.229199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.229213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.239164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.239224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.239238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.239244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.239250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.239264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.249198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.249259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.249273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.249279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.249285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.249300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 
00:36:27.813 [2024-12-16 16:42:16.259230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.259280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.259294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.259300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.259306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.259320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.269295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.269351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.269365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.269371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.269377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.269391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.279321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.279376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.279390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.279396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.279402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.279416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 
00:36:27.813 [2024-12-16 16:42:16.289245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.813 [2024-12-16 16:42:16.289300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.813 [2024-12-16 16:42:16.289316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.813 [2024-12-16 16:42:16.289323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.813 [2024-12-16 16:42:16.289329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.813 [2024-12-16 16:42:16.289343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.813 qpair failed and we were unable to recover it. 00:36:27.813 [2024-12-16 16:42:16.299341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.299395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.299409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.299416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.299422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.299436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.309297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.309356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.309370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.309377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.309383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.309397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 
00:36:27.814 [2024-12-16 16:42:16.319402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.319469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.319483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.319489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.319495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.319509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.329420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.329477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.329491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.329497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.329506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.329521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.339420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.339474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.339487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.339493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.339499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.339512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 
00:36:27.814 [2024-12-16 16:42:16.349467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.349524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.349537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.349544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.349550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.349564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.359546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.359621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.359635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.359642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.359647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.359662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.369546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.369609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.369623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.369629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.369635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.369649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 
00:36:27.814 [2024-12-16 16:42:16.379528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.379585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.379599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.379606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.379612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.379627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.389567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.389619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.389633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.389639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.389645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.389660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:27.814 [2024-12-16 16:42:16.399613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.399670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.399684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.399690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.399696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.399710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 
00:36:27.814 [2024-12-16 16:42:16.409638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:27.814 [2024-12-16 16:42:16.409698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:27.814 [2024-12-16 16:42:16.409712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:27.814 [2024-12-16 16:42:16.409719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:27.814 [2024-12-16 16:42:16.409724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:27.814 [2024-12-16 16:42:16.409739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:27.814 qpair failed and we were unable to recover it. 00:36:28.074 [2024-12-16 16:42:16.419714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.074 [2024-12-16 16:42:16.419771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.074 [2024-12-16 16:42:16.419787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.074 [2024-12-16 16:42:16.419794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.074 [2024-12-16 16:42:16.419799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.074 [2024-12-16 16:42:16.419814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.074 qpair failed and we were unable to recover it. 00:36:28.074 [2024-12-16 16:42:16.429689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.074 [2024-12-16 16:42:16.429743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.074 [2024-12-16 16:42:16.429757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.074 [2024-12-16 16:42:16.429763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.074 [2024-12-16 16:42:16.429769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.074 [2024-12-16 16:42:16.429782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.074 qpair failed and we were unable to recover it. 
00:36:28.074 [2024-12-16 16:42:16.439767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.074 [2024-12-16 16:42:16.439872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.074 [2024-12-16 16:42:16.439886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.074 [2024-12-16 16:42:16.439892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.074 [2024-12-16 16:42:16.439898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.074 [2024-12-16 16:42:16.439912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.449752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.449811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.449825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.449831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.449837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.449851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.459798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.459853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.459867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.459874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.459884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.459898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 
00:36:28.075 [2024-12-16 16:42:16.469815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.469872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.469886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.469893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.469899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.469913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.479873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.479931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.479945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.479951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.479957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.479971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.489906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.489963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.489977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.489983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.489989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.490003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 
00:36:28.075 [2024-12-16 16:42:16.499896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.499949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.499962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.499969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.499974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.499989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.509928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.509984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.509998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.510005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.510010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.510024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.519951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.520010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.520024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.520030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.520036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.520051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 
00:36:28.075 [2024-12-16 16:42:16.529985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.530043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.530057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.530064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.530070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.530084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.540019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.540076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.540090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.540100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.540107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.540120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.550045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.550105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.550122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.550129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.550135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.550149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 
00:36:28.075 [2024-12-16 16:42:16.560141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.560196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.560210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.560217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.560222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.560236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.570105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.570182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.075 [2024-12-16 16:42:16.570196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.075 [2024-12-16 16:42:16.570202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.075 [2024-12-16 16:42:16.570208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.075 [2024-12-16 16:42:16.570222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.075 qpair failed and we were unable to recover it. 00:36:28.075 [2024-12-16 16:42:16.580121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.075 [2024-12-16 16:42:16.580172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.580186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.580192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.580197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.580211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 
00:36:28.076 [2024-12-16 16:42:16.590150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.590205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.590218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.590227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.590233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.590247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.076 [2024-12-16 16:42:16.600185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.600242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.600255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.600262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.600267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.600281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.076 [2024-12-16 16:42:16.610202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.610257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.610271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.610278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.610283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.610298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 
00:36:28.076 [2024-12-16 16:42:16.620261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.620317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.620331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.620337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.620343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.620356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.076 [2024-12-16 16:42:16.630304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.630357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.630371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.630377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.630383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.630396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.076 [2024-12-16 16:42:16.640333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.640423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.640437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.640443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.640449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.640464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 
00:36:28.076 [2024-12-16 16:42:16.650332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.650389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.650403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.650410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.650415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.650428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.076 [2024-12-16 16:42:16.660357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.660419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.660433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.660440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.660446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.660459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.076 [2024-12-16 16:42:16.670390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.670445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.670459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.670466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.670471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.670485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 
00:36:28.076 [2024-12-16 16:42:16.680417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.076 [2024-12-16 16:42:16.680474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.076 [2024-12-16 16:42:16.680487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.076 [2024-12-16 16:42:16.680493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.076 [2024-12-16 16:42:16.680499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.076 [2024-12-16 16:42:16.680513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.076 qpair failed and we were unable to recover it. 00:36:28.336 [2024-12-16 16:42:16.690469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.336 [2024-12-16 16:42:16.690527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.336 [2024-12-16 16:42:16.690541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.336 [2024-12-16 16:42:16.690547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.336 [2024-12-16 16:42:16.690553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.336 [2024-12-16 16:42:16.690567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.336 qpair failed and we were unable to recover it. 00:36:28.336 [2024-12-16 16:42:16.700490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.336 [2024-12-16 16:42:16.700543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.336 [2024-12-16 16:42:16.700556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.336 [2024-12-16 16:42:16.700563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.336 [2024-12-16 16:42:16.700569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.336 [2024-12-16 16:42:16.700583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.336 qpair failed and we were unable to recover it. 
00:36:28.336 [2024-12-16 16:42:16.710429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.336 [2024-12-16 16:42:16.710481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.336 [2024-12-16 16:42:16.710496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.336 [2024-12-16 16:42:16.710503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.336 [2024-12-16 16:42:16.710508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.336 [2024-12-16 16:42:16.710524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.336 qpair failed and we were unable to recover it. 00:36:28.336 [2024-12-16 16:42:16.720589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.336 [2024-12-16 16:42:16.720651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.336 [2024-12-16 16:42:16.720666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.336 [2024-12-16 16:42:16.720675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.336 [2024-12-16 16:42:16.720680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.336 [2024-12-16 16:42:16.720695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.336 qpair failed and we were unable to recover it. 00:36:28.336 [2024-12-16 16:42:16.730571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.336 [2024-12-16 16:42:16.730626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.336 [2024-12-16 16:42:16.730640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.336 [2024-12-16 16:42:16.730646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.336 [2024-12-16 16:42:16.730652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.336 [2024-12-16 16:42:16.730666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.336 qpair failed and we were unable to recover it. 
00:36:28.336 [2024-12-16 16:42:16.740593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.336 [2024-12-16 16:42:16.740646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.336 [2024-12-16 16:42:16.740659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.336 [2024-12-16 16:42:16.740666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.336 [2024-12-16 16:42:16.740671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.740685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.750629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.750706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.750720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.750726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.750732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.750746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.760686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.760780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.760794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.760799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.760805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.760825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 
00:36:28.337 [2024-12-16 16:42:16.770688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.770746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.770759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.770765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.770771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.770785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.780717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.780776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.780790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.780796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.780801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.780816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.790748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.790820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.790834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.790840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.790846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.790860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 
00:36:28.337 [2024-12-16 16:42:16.800772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.800829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.800843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.800849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.800855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.800869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.810806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.810861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.810875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.810882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.810887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.810901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.820830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.820887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.820901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.820907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.820913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.820926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 
00:36:28.337 [2024-12-16 16:42:16.830859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.830911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.830925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.830931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.830937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.830951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.840952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.841055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.841069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.841076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.841081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.841099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.850921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.850996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.851013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.851019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.851025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.851039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 
00:36:28.337 [2024-12-16 16:42:16.860952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.861005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.861019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.861026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.861031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.861046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.871019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.337 [2024-12-16 16:42:16.871082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.337 [2024-12-16 16:42:16.871100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.337 [2024-12-16 16:42:16.871107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.337 [2024-12-16 16:42:16.871113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.337 [2024-12-16 16:42:16.871126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.337 qpair failed and we were unable to recover it. 00:36:28.337 [2024-12-16 16:42:16.880941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.881001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.881015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.881022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.881027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.881041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 
00:36:28.338 [2024-12-16 16:42:16.891068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.891130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.891145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.891151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.891160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.891174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 00:36:28.338 [2024-12-16 16:42:16.901084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.901152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.901166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.901173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.901179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.901194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 00:36:28.338 [2024-12-16 16:42:16.911111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.911180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.911195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.911202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.911207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.911222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 
00:36:28.338 [2024-12-16 16:42:16.921041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.921098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.921112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.921118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.921124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.921138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 00:36:28.338 [2024-12-16 16:42:16.931147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.931230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.931244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.931251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.931257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.931270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 00:36:28.338 [2024-12-16 16:42:16.941166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.338 [2024-12-16 16:42:16.941220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.338 [2024-12-16 16:42:16.941235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.338 [2024-12-16 16:42:16.941241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.338 [2024-12-16 16:42:16.941247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.338 [2024-12-16 16:42:16.941261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.338 qpair failed and we were unable to recover it. 
00:36:28.598 [2024-12-16 16:42:16.951186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:16.951240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:16.951254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:16.951260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:16.951266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:16.951280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 00:36:28.598 [2024-12-16 16:42:16.961247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:16.961321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:16.961334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:16.961341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:16.961347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:16.961361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 00:36:28.598 [2024-12-16 16:42:16.971301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:16.971358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:16.971372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:16.971378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:16.971384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:16.971398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 
00:36:28.598 [2024-12-16 16:42:16.981273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:16.981329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:16.981346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:16.981353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:16.981358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:16.981372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 00:36:28.598 [2024-12-16 16:42:16.991308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:16.991361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:16.991375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:16.991381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:16.991386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:16.991400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 00:36:28.598 [2024-12-16 16:42:17.001304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:17.001379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:17.001393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:17.001399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:17.001405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:17.001419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 
00:36:28.598 [2024-12-16 16:42:17.011314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:17.011376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:17.011392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:17.011398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:17.011404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:17.011418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 00:36:28.598 [2024-12-16 16:42:17.021446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:17.021499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:17.021513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:17.021519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:17.021528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:17.021543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 00:36:28.598 [2024-12-16 16:42:17.031445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:17.031502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.598 [2024-12-16 16:42:17.031516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.598 [2024-12-16 16:42:17.031522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.598 [2024-12-16 16:42:17.031527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.598 [2024-12-16 16:42:17.031541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.598 qpair failed and we were unable to recover it. 
00:36:28.598 [2024-12-16 16:42:17.041436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.598 [2024-12-16 16:42:17.041491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.041505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.041511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.041517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.041531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.051484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.051539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.051553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.051559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.051565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.051579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.061483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.061541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.061555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.061561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.061566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.061580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 
00:36:28.599 [2024-12-16 16:42:17.071522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.071580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.071594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.071600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.071606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.071619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.081567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.081627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.081641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.081647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.081653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.081667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.091574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.091649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.091663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.091669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.091675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.091689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 
00:36:28.599 [2024-12-16 16:42:17.101604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.101698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.101712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.101718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.101724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.101738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.111641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.111697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.111715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.111721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.111727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.111742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.121604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.121659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.121673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.121679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.121685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.121700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 
00:36:28.599 [2024-12-16 16:42:17.131708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.131765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.131779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.131785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.131791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.131804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.141680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.141767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.141781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.141787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.141793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.141807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.151741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.151795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.151809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.151821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.151827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.151841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 
00:36:28.599 [2024-12-16 16:42:17.161846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.161950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.161965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.161971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.161977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.599 [2024-12-16 16:42:17.161991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.599 qpair failed and we were unable to recover it. 00:36:28.599 [2024-12-16 16:42:17.171862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.599 [2024-12-16 16:42:17.171921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.599 [2024-12-16 16:42:17.171936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.599 [2024-12-16 16:42:17.171942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.599 [2024-12-16 16:42:17.171948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.600 [2024-12-16 16:42:17.171962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.600 qpair failed and we were unable to recover it. 00:36:28.600 [2024-12-16 16:42:17.181856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.600 [2024-12-16 16:42:17.181946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.600 [2024-12-16 16:42:17.181960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.600 [2024-12-16 16:42:17.181967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.600 [2024-12-16 16:42:17.181972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.600 [2024-12-16 16:42:17.181986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.600 qpair failed and we were unable to recover it. 
00:36:28.600 [2024-12-16 16:42:17.191865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.600 [2024-12-16 16:42:17.191920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.600 [2024-12-16 16:42:17.191934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.600 [2024-12-16 16:42:17.191940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.600 [2024-12-16 16:42:17.191945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.600 [2024-12-16 16:42:17.191962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.600 qpair failed and we were unable to recover it. 00:36:28.600 [2024-12-16 16:42:17.201848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.600 [2024-12-16 16:42:17.201908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.600 [2024-12-16 16:42:17.201923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.600 [2024-12-16 16:42:17.201929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.600 [2024-12-16 16:42:17.201935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.600 [2024-12-16 16:42:17.201949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.600 qpair failed and we were unable to recover it. 00:36:28.860 [2024-12-16 16:42:17.211946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.860 [2024-12-16 16:42:17.212012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.860 [2024-12-16 16:42:17.212027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.860 [2024-12-16 16:42:17.212033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.860 [2024-12-16 16:42:17.212039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.860 [2024-12-16 16:42:17.212053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.860 qpair failed and we were unable to recover it. 
00:36:28.860 [2024-12-16 16:42:17.221978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.860 [2024-12-16 16:42:17.222046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.860 [2024-12-16 16:42:17.222061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.860 [2024-12-16 16:42:17.222067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.860 [2024-12-16 16:42:17.222073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.860 [2024-12-16 16:42:17.222088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.860 qpair failed and we were unable to recover it. 00:36:28.860 [2024-12-16 16:42:17.231997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.860 [2024-12-16 16:42:17.232053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.860 [2024-12-16 16:42:17.232069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.860 [2024-12-16 16:42:17.232076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.860 [2024-12-16 16:42:17.232081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.860 [2024-12-16 16:42:17.232099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.860 qpair failed and we were unable to recover it. 00:36:28.860 [2024-12-16 16:42:17.242023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.860 [2024-12-16 16:42:17.242082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.242104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.242111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.242117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.242131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 
00:36:28.861 [2024-12-16 16:42:17.252059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.252125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.252139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.252146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.252152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.252166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 00:36:28.861 [2024-12-16 16:42:17.262009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.262072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.262086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.262093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.262103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.262118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 00:36:28.861 [2024-12-16 16:42:17.272055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.272115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.272129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.272136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.272142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.272156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 
00:36:28.861 [2024-12-16 16:42:17.282122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.282215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.282229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.282238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.282244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.282258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 00:36:28.861 [2024-12-16 16:42:17.292187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.292261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.292276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.292282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.292288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.292303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 00:36:28.861 [2024-12-16 16:42:17.302328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.302430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.302444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.302450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.302456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.302470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 
00:36:28.861 [2024-12-16 16:42:17.312262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.312317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.312332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.312338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.312344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.312358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 00:36:28.861 [2024-12-16 16:42:17.322286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.322343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.322357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.322364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.322369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.322386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 00:36:28.861 [2024-12-16 16:42:17.332337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.861 [2024-12-16 16:42:17.332390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.861 [2024-12-16 16:42:17.332404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.861 [2024-12-16 16:42:17.332410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.861 [2024-12-16 16:42:17.332416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.861 [2024-12-16 16:42:17.332430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.861 qpair failed and we were unable to recover it. 
00:36:28.861 [2024-12-16 16:42:17.342395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.862 [2024-12-16 16:42:17.342452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.862 [2024-12-16 16:42:17.342466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.862 [2024-12-16 16:42:17.342472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.862 [2024-12-16 16:42:17.342478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.862 [2024-12-16 16:42:17.342492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.862 qpair failed and we were unable to recover it. 00:36:28.862 [2024-12-16 16:42:17.352339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.862 [2024-12-16 16:42:17.352391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.862 [2024-12-16 16:42:17.352405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.862 [2024-12-16 16:42:17.352411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.862 [2024-12-16 16:42:17.352417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.862 [2024-12-16 16:42:17.352430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.862 qpair failed and we were unable to recover it. 00:36:28.862 [2024-12-16 16:42:17.362386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.862 [2024-12-16 16:42:17.362478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.862 [2024-12-16 16:42:17.362492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.862 [2024-12-16 16:42:17.362498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.862 [2024-12-16 16:42:17.362504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.862 [2024-12-16 16:42:17.362518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.862 qpair failed and we were unable to recover it. 
00:36:28.862 [2024-12-16 16:42:17.372344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.862 [2024-12-16 16:42:17.372407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.862 [2024-12-16 16:42:17.372421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.862 [2024-12-16 16:42:17.372427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.862 [2024-12-16 16:42:17.372433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.862 [2024-12-16 16:42:17.372447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.862 qpair failed and we were unable to recover it. 00:36:28.862 [2024-12-16 16:42:17.382556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.862 [2024-12-16 16:42:17.382662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.862 [2024-12-16 16:42:17.382676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.862 [2024-12-16 16:42:17.382683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.862 [2024-12-16 16:42:17.382689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.862 [2024-12-16 16:42:17.382703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.862 qpair failed and we were unable to recover it. 00:36:28.862 [2024-12-16 16:42:17.392462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:28.862 [2024-12-16 16:42:17.392517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:28.862 [2024-12-16 16:42:17.392530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:28.862 [2024-12-16 16:42:17.392537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:28.862 [2024-12-16 16:42:17.392542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:28.862 [2024-12-16 16:42:17.392556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:28.862 qpair failed and we were unable to recover it. 
00:36:29.644 [2024-12-16 16:42:18.034305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.644 [2024-12-16 16:42:18.034358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.644 [2024-12-16 16:42:18.034372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.644 [2024-12-16 16:42:18.034378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.644 [2024-12-16 16:42:18.034384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.644 [2024-12-16 16:42:18.034398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.644 qpair failed and we were unable to recover it. 00:36:29.644 [2024-12-16 16:42:18.044332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.644 [2024-12-16 16:42:18.044385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.644 [2024-12-16 16:42:18.044398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.644 [2024-12-16 16:42:18.044404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.644 [2024-12-16 16:42:18.044410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.644 [2024-12-16 16:42:18.044424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.644 qpair failed and we were unable to recover it. 00:36:29.644 [2024-12-16 16:42:18.054413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.644 [2024-12-16 16:42:18.054471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.644 [2024-12-16 16:42:18.054485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.644 [2024-12-16 16:42:18.054491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.644 [2024-12-16 16:42:18.054497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.644 [2024-12-16 16:42:18.054511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.644 qpair failed and we were unable to recover it. 
00:36:29.644 [2024-12-16 16:42:18.064390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.644 [2024-12-16 16:42:18.064443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.644 [2024-12-16 16:42:18.064456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.644 [2024-12-16 16:42:18.064462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.644 [2024-12-16 16:42:18.064468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.064482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.074428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.074484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.074498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.074504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.074510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.074524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.084459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.084516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.084529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.084536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.084541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.084556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 
00:36:29.645 [2024-12-16 16:42:18.094480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.094541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.094555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.094561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.094567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.094581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.104521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.104570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.104587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.104593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.104599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.104612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.114528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.114579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.114594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.114600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.114606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.114621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 
00:36:29.645 [2024-12-16 16:42:18.124546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.124603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.124617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.124624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.124629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.124644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.134594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.134650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.134664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.134670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.134676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.134691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.144621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.144675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.144688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.144695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.144703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.144717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 
00:36:29.645 [2024-12-16 16:42:18.154710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.154764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.154778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.154785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.154790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.154804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.164727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.164781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.164795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.164801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.164807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.164821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.174651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.174705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.174719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.174725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.174731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.174744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 
00:36:29.645 [2024-12-16 16:42:18.184705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.184760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.184774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.645 [2024-12-16 16:42:18.184780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.645 [2024-12-16 16:42:18.184786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.645 [2024-12-16 16:42:18.184800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.645 qpair failed and we were unable to recover it. 00:36:29.645 [2024-12-16 16:42:18.194788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.645 [2024-12-16 16:42:18.194846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.645 [2024-12-16 16:42:18.194859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.646 [2024-12-16 16:42:18.194866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.646 [2024-12-16 16:42:18.194871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.646 [2024-12-16 16:42:18.194886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.646 qpair failed and we were unable to recover it. 00:36:29.646 [2024-12-16 16:42:18.204811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.646 [2024-12-16 16:42:18.204899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.646 [2024-12-16 16:42:18.204913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.646 [2024-12-16 16:42:18.204920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.646 [2024-12-16 16:42:18.204925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.646 [2024-12-16 16:42:18.204940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.646 qpair failed and we were unable to recover it. 
00:36:29.646 [2024-12-16 16:42:18.214756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.646 [2024-12-16 16:42:18.214849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.646 [2024-12-16 16:42:18.214864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.646 [2024-12-16 16:42:18.214871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.646 [2024-12-16 16:42:18.214877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.646 [2024-12-16 16:42:18.214891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.646 qpair failed and we were unable to recover it. 00:36:29.646 [2024-12-16 16:42:18.224844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.646 [2024-12-16 16:42:18.224925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.646 [2024-12-16 16:42:18.224940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.646 [2024-12-16 16:42:18.224946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.646 [2024-12-16 16:42:18.224952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.646 [2024-12-16 16:42:18.224967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.646 qpair failed and we were unable to recover it. 00:36:29.646 [2024-12-16 16:42:18.234865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.646 [2024-12-16 16:42:18.234919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.646 [2024-12-16 16:42:18.234936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.646 [2024-12-16 16:42:18.234942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.646 [2024-12-16 16:42:18.234948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.646 [2024-12-16 16:42:18.234963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.646 qpair failed and we were unable to recover it. 
00:36:29.646 [2024-12-16 16:42:18.244913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.646 [2024-12-16 16:42:18.244977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.646 [2024-12-16 16:42:18.244991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.646 [2024-12-16 16:42:18.244998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.646 [2024-12-16 16:42:18.245003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.646 [2024-12-16 16:42:18.245017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.646 qpair failed and we were unable to recover it. 00:36:29.905 [2024-12-16 16:42:18.254936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.905 [2024-12-16 16:42:18.254990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.905 [2024-12-16 16:42:18.255004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.905 [2024-12-16 16:42:18.255011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.905 [2024-12-16 16:42:18.255016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.905 [2024-12-16 16:42:18.255031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.905 qpair failed and we were unable to recover it. 00:36:29.905 [2024-12-16 16:42:18.265014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.905 [2024-12-16 16:42:18.265074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.905 [2024-12-16 16:42:18.265088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.905 [2024-12-16 16:42:18.265097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.905 [2024-12-16 16:42:18.265104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.905 [2024-12-16 16:42:18.265118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.905 qpair failed and we were unable to recover it. 
00:36:29.905 [2024-12-16 16:42:18.274996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.905 [2024-12-16 16:42:18.275077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.905 [2024-12-16 16:42:18.275091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.905 [2024-12-16 16:42:18.275104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.905 [2024-12-16 16:42:18.275110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.905 [2024-12-16 16:42:18.275124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.905 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.285074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.285140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.285154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.285160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.285166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.285180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.295051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.295112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.295126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.295132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.295138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.295153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 
00:36:29.906 [2024-12-16 16:42:18.305119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.305171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.305185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.305191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.305197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.305211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.315084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.315143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.315158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.315164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.315171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.315188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.325165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.325226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.325240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.325247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.325253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.325267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 
00:36:29.906 [2024-12-16 16:42:18.335171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.335225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.335239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.335246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.335251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.335265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.345144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.345226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.345240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.345247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.345252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.345266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.355206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.355255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.355269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.355275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.355281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.355295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 
00:36:29.906 [2024-12-16 16:42:18.365298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.365401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.365415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.365421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.365428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.365442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.375353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.375409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.375424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.375430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.375436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.375450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.385242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.385298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.385312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.385319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.385325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.385340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 
00:36:29.906 [2024-12-16 16:42:18.395303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.395354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.395368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.395374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.395380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.395393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.405305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.906 [2024-12-16 16:42:18.405389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.906 [2024-12-16 16:42:18.405403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.906 [2024-12-16 16:42:18.405412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.906 [2024-12-16 16:42:18.405418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.906 [2024-12-16 16:42:18.405432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.906 qpair failed and we were unable to recover it. 00:36:29.906 [2024-12-16 16:42:18.415304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.415358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.415373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.415380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.415385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.415400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 
00:36:29.907 [2024-12-16 16:42:18.425424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.425484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.425498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.425505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.425510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.425525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 00:36:29.907 [2024-12-16 16:42:18.435358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.435412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.435426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.435433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.435439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.435453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 00:36:29.907 [2024-12-16 16:42:18.445446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.445505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.445518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.445525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.445530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.445548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 
00:36:29.907 [2024-12-16 16:42:18.455417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.455470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.455484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.455490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.455496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.455510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 00:36:29.907 [2024-12-16 16:42:18.465493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.465543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.465557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.465564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.465570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.465584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 00:36:29.907 [2024-12-16 16:42:18.475530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.475586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.475600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.475606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.475612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.475626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 
00:36:29.907 [2024-12-16 16:42:18.485566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.485625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.485639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.485646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.485652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.485666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 00:36:29.907 [2024-12-16 16:42:18.495582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.495636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.495650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.495656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.495663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.495676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 00:36:29.907 [2024-12-16 16:42:18.505544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:29.907 [2024-12-16 16:42:18.505603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:29.907 [2024-12-16 16:42:18.505617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:29.907 [2024-12-16 16:42:18.505624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:29.907 [2024-12-16 16:42:18.505630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:29.907 [2024-12-16 16:42:18.505643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:29.907 qpair failed and we were unable to recover it. 
00:36:30.167 [2024-12-16 16:42:18.515630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.167 [2024-12-16 16:42:18.515715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.167 [2024-12-16 16:42:18.515730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.167 [2024-12-16 16:42:18.515736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.167 [2024-12-16 16:42:18.515741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.167 [2024-12-16 16:42:18.515756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.167 qpair failed and we were unable to recover it. 00:36:30.167 [2024-12-16 16:42:18.525619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.167 [2024-12-16 16:42:18.525674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.167 [2024-12-16 16:42:18.525688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.167 [2024-12-16 16:42:18.525694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.167 [2024-12-16 16:42:18.525700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.167 [2024-12-16 16:42:18.525714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.167 qpair failed and we were unable to recover it. 00:36:30.167 [2024-12-16 16:42:18.535652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.167 [2024-12-16 16:42:18.535710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.167 [2024-12-16 16:42:18.535726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.167 [2024-12-16 16:42:18.535733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.167 [2024-12-16 16:42:18.535739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.167 [2024-12-16 16:42:18.535753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.167 qpair failed and we were unable to recover it. 
[The same seven-line CONNECT failure sequence (Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, CQ transport error -6 on qpair id 1) repeats for every subsequent I/O qpair connect attempt from 16:42:18.545 through 16:42:19.197, each ending "qpair failed and we were unable to recover it."]
00:36:30.691 [2024-12-16 16:42:19.207663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.691 [2024-12-16 16:42:19.207717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.691 [2024-12-16 16:42:19.207731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.691 [2024-12-16 16:42:19.207738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.691 [2024-12-16 16:42:19.207743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.691 [2024-12-16 16:42:19.207758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.691 qpair failed and we were unable to recover it. 00:36:30.691 [2024-12-16 16:42:19.217677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.691 [2024-12-16 16:42:19.217761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.691 [2024-12-16 16:42:19.217776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.691 [2024-12-16 16:42:19.217783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.691 [2024-12-16 16:42:19.217788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.691 [2024-12-16 16:42:19.217803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.691 qpair failed and we were unable to recover it. 00:36:30.691 [2024-12-16 16:42:19.227760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.691 [2024-12-16 16:42:19.227815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.227832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.227838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.227844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.227858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 
00:36:30.692 [2024-12-16 16:42:19.237791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.692 [2024-12-16 16:42:19.237848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.237862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.237868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.237874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.237888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 00:36:30.692 [2024-12-16 16:42:19.247768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.692 [2024-12-16 16:42:19.247819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.247833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.247840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.247845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.247859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 00:36:30.692 [2024-12-16 16:42:19.257787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.692 [2024-12-16 16:42:19.257841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.257855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.257862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.257868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.257882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 
00:36:30.692 [2024-12-16 16:42:19.267813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.692 [2024-12-16 16:42:19.267866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.267880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.267886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.267896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.267910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 00:36:30.692 [2024-12-16 16:42:19.277783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.692 [2024-12-16 16:42:19.277833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.277847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.277853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.277859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.277873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 00:36:30.692 [2024-12-16 16:42:19.287888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.692 [2024-12-16 16:42:19.287941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.692 [2024-12-16 16:42:19.287955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.692 [2024-12-16 16:42:19.287961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.692 [2024-12-16 16:42:19.287967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.692 [2024-12-16 16:42:19.287981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.692 qpair failed and we were unable to recover it. 
00:36:30.952 [2024-12-16 16:42:19.297887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.297944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.297958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.297964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.297970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.297985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.307937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.307991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.308004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.308011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.308017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.308032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.318002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.318056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.318071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.318077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.318083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.318101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 
00:36:30.952 [2024-12-16 16:42:19.327994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.328087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.328105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.328111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.328117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.328132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.338022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.338080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.338099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.338106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.338112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.338127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.348055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.348114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.348128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.348134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.348140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.348154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 
00:36:30.952 [2024-12-16 16:42:19.358125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.358185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.358199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.358205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.358211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.358225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.368093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.368151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.368164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.368171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.368176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.368190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.378144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.378202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.378217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.378223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.378229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.378243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 
00:36:30.952 [2024-12-16 16:42:19.388168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.388223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.388237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.388243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.388249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.388263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.398247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.398352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.398365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.398375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.952 [2024-12-16 16:42:19.398381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.952 [2024-12-16 16:42:19.398395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.952 qpair failed and we were unable to recover it. 00:36:30.952 [2024-12-16 16:42:19.408231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.952 [2024-12-16 16:42:19.408292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.952 [2024-12-16 16:42:19.408306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.952 [2024-12-16 16:42:19.408312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.408318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.408332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 
00:36:30.953 [2024-12-16 16:42:19.418251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.418305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.418320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.418326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.418332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.418346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.428273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.428331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.428345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.428351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.428357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.428371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.438310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.438360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.438373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.438380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.438386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.438403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 
00:36:30.953 [2024-12-16 16:42:19.448379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.448435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.448449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.448455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.448461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.448474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.458360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.458418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.458431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.458438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.458443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.458457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.468392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.468442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.468456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.468462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.468468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.468482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 
00:36:30.953 [2024-12-16 16:42:19.478414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.478468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.478482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.478488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.478494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.478508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.488501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.488607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.488621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.488628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.488633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.488647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.498475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.498533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.498547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.498553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.498559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.498573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 
00:36:30.953 [2024-12-16 16:42:19.508502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.508562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.508575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.508582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.508588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.508601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.518525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.518575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.518590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.953 [2024-12-16 16:42:19.518596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.953 [2024-12-16 16:42:19.518602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.953 [2024-12-16 16:42:19.518617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.953 qpair failed and we were unable to recover it. 00:36:30.953 [2024-12-16 16:42:19.528563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.953 [2024-12-16 16:42:19.528625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.953 [2024-12-16 16:42:19.528639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.954 [2024-12-16 16:42:19.528648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.954 [2024-12-16 16:42:19.528654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.954 [2024-12-16 16:42:19.528668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.954 qpair failed and we were unable to recover it. 
00:36:30.954 [2024-12-16 16:42:19.538596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.954 [2024-12-16 16:42:19.538650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.954 [2024-12-16 16:42:19.538663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.954 [2024-12-16 16:42:19.538670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.954 [2024-12-16 16:42:19.538676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.954 [2024-12-16 16:42:19.538689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.954 qpair failed and we were unable to recover it. 00:36:30.954 [2024-12-16 16:42:19.548647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:30.954 [2024-12-16 16:42:19.548706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:30.954 [2024-12-16 16:42:19.548720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:30.954 [2024-12-16 16:42:19.548727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:30.954 [2024-12-16 16:42:19.548733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:30.954 [2024-12-16 16:42:19.548747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.954 qpair failed and we were unable to recover it. 00:36:31.217 [2024-12-16 16:42:19.558661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.217 [2024-12-16 16:42:19.558719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.217 [2024-12-16 16:42:19.558733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.217 [2024-12-16 16:42:19.558739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.217 [2024-12-16 16:42:19.558745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.217 [2024-12-16 16:42:19.558759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.217 qpair failed and we were unable to recover it. 
00:36:31.217 [2024-12-16 16:42:19.568668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.217 [2024-12-16 16:42:19.568726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.217 [2024-12-16 16:42:19.568741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.217 [2024-12-16 16:42:19.568747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.217 [2024-12-16 16:42:19.568753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.217 [2024-12-16 16:42:19.568770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.217 qpair failed and we were unable to recover it. 00:36:31.217 [2024-12-16 16:42:19.578634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.217 [2024-12-16 16:42:19.578689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.217 [2024-12-16 16:42:19.578703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.217 [2024-12-16 16:42:19.578710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.217 [2024-12-16 16:42:19.578716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.217 [2024-12-16 16:42:19.578729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.217 qpair failed and we were unable to recover it. 00:36:31.217 [2024-12-16 16:42:19.588747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.217 [2024-12-16 16:42:19.588822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.217 [2024-12-16 16:42:19.588835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.217 [2024-12-16 16:42:19.588842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.217 [2024-12-16 16:42:19.588847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.217 [2024-12-16 16:42:19.588862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.217 qpair failed and we were unable to recover it. 
00:36:31.217 [2024-12-16 16:42:19.598758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.217 [2024-12-16 16:42:19.598814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.217 [2024-12-16 16:42:19.598828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.217 [2024-12-16 16:42:19.598835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.217 [2024-12-16 16:42:19.598840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.217 [2024-12-16 16:42:19.598855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.217 qpair failed and we were unable to recover it. 00:36:31.217 [2024-12-16 16:42:19.608794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.217 [2024-12-16 16:42:19.608848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.608862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.608868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.608874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.608888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.618821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.618889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.618905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.618912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.618918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.618933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 
00:36:31.218 [2024-12-16 16:42:19.628850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.628905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.628919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.628926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.628932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.628946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.638864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.638919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.638932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.638939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.638945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.638959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.648893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.648951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.648965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.648971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.648977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.648991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 
00:36:31.218 [2024-12-16 16:42:19.658930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.658988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.659004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.659010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.659016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.659030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.668958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.669013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.669027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.669033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.669039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.669052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.678917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.678971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.678986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.678992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.678998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.679012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 
00:36:31.218 [2024-12-16 16:42:19.689028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.689083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.689101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.689108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.689114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.689128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.699054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.699129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.699143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.699149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.699158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.699174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.709076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.709137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.709151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.709158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.709164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.709178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 
00:36:31.218 [2024-12-16 16:42:19.719107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.719166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.719180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.719186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.719192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.719207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.729137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.729190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.218 [2024-12-16 16:42:19.729204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.218 [2024-12-16 16:42:19.729211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.218 [2024-12-16 16:42:19.729216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.218 [2024-12-16 16:42:19.729230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.218 qpair failed and we were unable to recover it. 00:36:31.218 [2024-12-16 16:42:19.739160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.218 [2024-12-16 16:42:19.739216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.739230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.739237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.739243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.739257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 
00:36:31.219 [2024-12-16 16:42:19.749114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.749171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.749186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.749192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.749198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.749212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 00:36:31.219 [2024-12-16 16:42:19.759204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.759268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.759281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.759288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.759294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.759308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 00:36:31.219 [2024-12-16 16:42:19.769168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.769224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.769239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.769245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.769251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.769265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 
00:36:31.219 [2024-12-16 16:42:19.779210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.779270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.779284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.779290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.779296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.779310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 00:36:31.219 [2024-12-16 16:42:19.789241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.789315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.789332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.789339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.789344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.789360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 00:36:31.219 [2024-12-16 16:42:19.799338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.799391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.799405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.799411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.799417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.799431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 
00:36:31.219 [2024-12-16 16:42:19.809385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.809463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.809477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.809484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.809490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.809503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 00:36:31.219 [2024-12-16 16:42:19.819409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.219 [2024-12-16 16:42:19.819467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.219 [2024-12-16 16:42:19.819482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.219 [2024-12-16 16:42:19.819488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.219 [2024-12-16 16:42:19.819494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.219 [2024-12-16 16:42:19.819508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.219 qpair failed and we were unable to recover it. 00:36:31.516 [2024-12-16 16:42:19.829439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.516 [2024-12-16 16:42:19.829498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.516 [2024-12-16 16:42:19.829512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.516 [2024-12-16 16:42:19.829518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.516 [2024-12-16 16:42:19.829526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.516 [2024-12-16 16:42:19.829541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.516 qpair failed and we were unable to recover it. 
00:36:31.516 [2024-12-16 16:42:19.839380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.516 [2024-12-16 16:42:19.839449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.516 [2024-12-16 16:42:19.839463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.516 [2024-12-16 16:42:19.839469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.516 [2024-12-16 16:42:19.839475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.516 [2024-12-16 16:42:19.839489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.516 qpair failed and we were unable to recover it. 00:36:31.516 [2024-12-16 16:42:19.849409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.516 [2024-12-16 16:42:19.849471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.516 [2024-12-16 16:42:19.849486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.516 [2024-12-16 16:42:19.849492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.516 [2024-12-16 16:42:19.849498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.516 [2024-12-16 16:42:19.849512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.516 qpair failed and we were unable to recover it. 00:36:31.516 [2024-12-16 16:42:19.859461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.516 [2024-12-16 16:42:19.859514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.516 [2024-12-16 16:42:19.859528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.516 [2024-12-16 16:42:19.859534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.516 [2024-12-16 16:42:19.859540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.516 [2024-12-16 16:42:19.859554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.516 qpair failed and we were unable to recover it. 
00:36:31.516 [2024-12-16 16:42:19.869571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.516 [2024-12-16 16:42:19.869629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.516 [2024-12-16 16:42:19.869643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.516 [2024-12-16 16:42:19.869649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.516 [2024-12-16 16:42:19.869654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.516 [2024-12-16 16:42:19.869668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.516 qpair failed and we were unable to recover it. 00:36:31.516 [2024-12-16 16:42:19.879480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.516 [2024-12-16 16:42:19.879540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.516 [2024-12-16 16:42:19.879553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.516 [2024-12-16 16:42:19.879559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.516 [2024-12-16 16:42:19.879565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.879578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.889594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.889646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.889659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.889666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.889672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.889685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 
00:36:31.517 [2024-12-16 16:42:19.899633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.899711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.899725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.899731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.899737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.899751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.909650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.909736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.909749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.909755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.909761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.909775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.919609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.919704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.919719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.919725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.919731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.919745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 
00:36:31.517 [2024-12-16 16:42:19.929744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.929804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.929818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.929824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.929830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.929843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.939760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.939828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.939842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.939848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.939854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.939868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.949750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.949807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.949821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.949827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.949833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.949847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 
00:36:31.517 [2024-12-16 16:42:19.959732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.959827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.959841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.959850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.959856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.959870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.969822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.969884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.969898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.969905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.969911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.969925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.979877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.979933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.979947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.979953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.979959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.979973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 
00:36:31.517 [2024-12-16 16:42:19.989932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.989992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.990006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.990012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.990018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.990032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:19.999848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:19.999900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:19.999914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.517 [2024-12-16 16:42:19.999920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.517 [2024-12-16 16:42:19.999926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.517 [2024-12-16 16:42:19.999944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.517 qpair failed and we were unable to recover it. 00:36:31.517 [2024-12-16 16:42:20.009905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.517 [2024-12-16 16:42:20.009986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.517 [2024-12-16 16:42:20.010000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.010007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.010013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.010027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 
00:36:31.518 [2024-12-16 16:42:20.019967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.020025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.020042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.020050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.020056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.020072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.029964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.030046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.030061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.030068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.030074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.030088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.039953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.040013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.040027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.040034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.040039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.040054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 
00:36:31.518 [2024-12-16 16:42:20.050084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.050162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.050179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.050186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.050192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.050207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.060082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.060197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.060212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.060219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.060225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.060240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.070084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.070147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.070162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.070169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.070174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.070190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 
00:36:31.518 [2024-12-16 16:42:20.080070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.080131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.080145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.080152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.080158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.080172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.090214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.090282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.090305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.090312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.090318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.090334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.100146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.100203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.100217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.100223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.100229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.100244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 
00:36:31.518 [2024-12-16 16:42:20.110248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.110321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.110335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.110342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.110348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.110362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.518 [2024-12-16 16:42:20.120194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.518 [2024-12-16 16:42:20.120250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.518 [2024-12-16 16:42:20.120265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.518 [2024-12-16 16:42:20.120271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.518 [2024-12-16 16:42:20.120277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.518 [2024-12-16 16:42:20.120292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.518 qpair failed and we were unable to recover it. 00:36:31.810 [2024-12-16 16:42:20.130315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.810 [2024-12-16 16:42:20.130374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.810 [2024-12-16 16:42:20.130388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.810 [2024-12-16 16:42:20.130394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.810 [2024-12-16 16:42:20.130400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.810 [2024-12-16 16:42:20.130418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.810 qpair failed and we were unable to recover it. 
00:36:31.810 [2024-12-16 16:42:20.140290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.810 [2024-12-16 16:42:20.140364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.810 [2024-12-16 16:42:20.140378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.810 [2024-12-16 16:42:20.140384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.810 [2024-12-16 16:42:20.140390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.810 [2024-12-16 16:42:20.140404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.810 qpair failed and we were unable to recover it. 00:36:31.810 [2024-12-16 16:42:20.150280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.810 [2024-12-16 16:42:20.150343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.810 [2024-12-16 16:42:20.150357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.810 [2024-12-16 16:42:20.150364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.810 [2024-12-16 16:42:20.150369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.810 [2024-12-16 16:42:20.150384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.810 qpair failed and we were unable to recover it. 00:36:31.810 [2024-12-16 16:42:20.160439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.810 [2024-12-16 16:42:20.160495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.810 [2024-12-16 16:42:20.160509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.810 [2024-12-16 16:42:20.160516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.810 [2024-12-16 16:42:20.160522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.160536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 
00:36:31.811 [2024-12-16 16:42:20.170412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.170469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.170483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.170489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.170495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.170509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.180357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.180411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.180425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.180432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.180438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.180452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.190452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.190508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.190522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.190529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.190534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.190548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 
00:36:31.811 [2024-12-16 16:42:20.200470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.200522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.200534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.200540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.200546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.200560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.210523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.210577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.210590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.210596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.210602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.210617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.220543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.220599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.220617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.220623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.220629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.220643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 
00:36:31.811 [2024-12-16 16:42:20.230507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.230568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.230582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.230589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.230594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.230609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.240632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.240689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.240703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.240709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.240715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.240728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.250623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.250684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.250698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.250704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.250710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.250725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 
00:36:31.811 [2024-12-16 16:42:20.260661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.260719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.260732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.260739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.260747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.260761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.270707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.270763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.270777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.270784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.270790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.270805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.280714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.280763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.280777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.280784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.280789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.280803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 
00:36:31.811 [2024-12-16 16:42:20.290752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.811 [2024-12-16 16:42:20.290812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.811 [2024-12-16 16:42:20.290826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.811 [2024-12-16 16:42:20.290832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.811 [2024-12-16 16:42:20.290838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.811 [2024-12-16 16:42:20.290852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.811 qpair failed and we were unable to recover it. 00:36:31.811 [2024-12-16 16:42:20.300772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.300826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.300839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.300846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.300852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.300866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.310861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.310913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.310927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.310934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.310940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.310954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 
00:36:31.812 [2024-12-16 16:42:20.320830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.320879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.320893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.320899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.320905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.320920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.330862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.330918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.330932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.330938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.330944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.330958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.340885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.340941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.340955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.340962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.340967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.340981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 
00:36:31.812 [2024-12-16 16:42:20.350905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.350963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.350981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.350987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.350993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.351008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.360955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.361013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.361030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.361037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.361044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.361059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.370974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.371028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.371043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.371051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.371056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.371071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 
00:36:31.812 [2024-12-16 16:42:20.381017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.381077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.381092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.381103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.381109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.381124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.391055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.391119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.391134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.391143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.391149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4bc000b90 00:36:31.812 [2024-12-16 16:42:20.391163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.812 qpair failed and we were unable to recover it. 00:36:31.812 [2024-12-16 16:42:20.401263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.401384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.401441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.401466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.401487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4b0000b90 00:36:31.812 [2024-12-16 16:42:20.401538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:31.812 qpair failed and we were unable to recover it. 
00:36:31.812 [2024-12-16 16:42:20.411131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:31.812 [2024-12-16 16:42:20.411214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:31.812 [2024-12-16 16:42:20.411243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:31.812 [2024-12-16 16:42:20.411257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:31.812 [2024-12-16 16:42:20.411270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4b0000b90 00:36:31.812 [2024-12-16 16:42:20.411302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:31.812 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-16 16:42:20.421063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.071 [2024-12-16 16:42:20.421146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.071 [2024-12-16 16:42:20.421171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.071 [2024-12-16 16:42:20.421182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.071 [2024-12-16 16:42:20.421191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4b0000b90 00:36:32.071 [2024-12-16 16:42:20.421214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-16 16:42:20.431193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.071 [2024-12-16 16:42:20.431307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.071 [2024-12-16 16:42:20.431364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.071 [2024-12-16 16:42:20.431390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.071 [2024-12-16 16:42:20.431412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4b4000b90 00:36:32.071 [2024-12-16 16:42:20.431461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:32.071 qpair failed and we were unable to recover it. 
00:36:32.071 [2024-12-16 16:42:20.441174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.071 [2024-12-16 16:42:20.441244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.071 [2024-12-16 16:42:20.441274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.071 [2024-12-16 16:42:20.441288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.071 [2024-12-16 16:42:20.441301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4b4000b90 00:36:32.071 [2024-12-16 16:42:20.441332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-16 16:42:20.441508] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:32.072 A controller has encountered a failure and is being reset. 00:36:32.072 [2024-12-16 16:42:20.451223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.072 [2024-12-16 16:42:20.451333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.072 [2024-12-16 16:42:20.451390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.072 [2024-12-16 16:42:20.451415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.072 [2024-12-16 16:42:20.451435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa35cd0 00:36:32.072 [2024-12-16 16:42:20.451484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-16 16:42:20.461245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.072 [2024-12-16 16:42:20.461326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.072 [2024-12-16 16:42:20.461356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.072 [2024-12-16 16:42:20.461370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.072 [2024-12-16 16:42:20.461383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa35cd0 00:36:32.072 [2024-12-16 16:42:20.461413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 Controller properly reset. 
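The block above is the expected failure mode for this disconnect test: the host keeps retrying the Fabrics CONNECT for I/O qpairs against controller ID 0x1, which the target no longer recognizes, so every attempt completes with sct 1, sc 130 (0x82, the NVMe-oF "Connect Invalid Parameters" status) until the keep-alive also fails and the controller is reset. In the log these retries come from SPDK's userspace initiator (hence the nvme_tcp.c/nvme_fabric.c sources). As a rough illustration only, not part of the autotest scripts, a comparable NVMe/TCP fabrics CONNECT against the same target can be driven from the Linux kernel initiator with nvme-cli; the address, port, and subsystem NQN below are taken from the log, everything else is an assumption:

#!/usr/bin/env bash
# Hedged sketch: exercise an NVMe/TCP fabrics CONNECT like the one retried above.
# Assumes nvme-cli is installed and the SPDK target from the log is listening.
set -euo pipefail

TRADDR=10.0.0.2                      # target address from the log
TRSVCID=4420                         # target service id (TCP port) from the log
SUBNQN=nqn.2016-06.io.spdk:cnode1    # subsystem NQN from the log

modprobe nvme-tcp                    # load the kernel NVMe/TCP initiator transport

# Issue the fabrics CONNECT; in the failure mode logged above this returns
# non-zero and the kernel logs the NVMe status for the CONNECT command.
if nvme connect -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN"; then
    nvme list-subsys                 # confirm the admin and I/O queues came up
    nvme disconnect -n "$SUBNQN"     # tear the association back down
else
    dmesg | tail -n 20               # inspect the reported status (sct/sc pair)
fi

A kernel-side connect only approximates what the test exercises; the autotest drives the retries through SPDK's own poll loop, which is why the recovery above ends with the controller being reset and re-initialized rather than the association being dropped.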
00:36:32.072 Initializing NVMe Controllers 00:36:32.072 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:32.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:32.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:32.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:32.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:32.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:32.072 Initialization complete. Launching workers. 00:36:32.072 Starting thread on core 1 00:36:32.072 Starting thread on core 2 00:36:32.072 Starting thread on core 3 00:36:32.072 Starting thread on core 0 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:32.072 00:36:32.072 real 0m10.908s 00:36:32.072 user 0m19.174s 00:36:32.072 sys 0m4.835s 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.072 ************************************ 00:36:32.072 END TEST nvmf_target_disconnect_tc2 00:36:32.072 ************************************ 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.072 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.072 rmmod nvme_tcp 00:36:32.072 rmmod nvme_fabrics 00:36:32.331 rmmod nvme_keyring 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1209041 ']' 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1209041 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1209041 ']' 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1209041 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209041 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209041' 00:36:32.331 killing process with pid 1209041 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1209041 00:36:32.331 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1209041 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.590 16:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.496 16:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:34.496 00:36:34.496 real 0m19.657s 00:36:34.496 user 0m47.438s 00:36:34.496 sys 0m9.649s 00:36:34.496 16:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.496 16:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:34.496 ************************************ 00:36:34.496 END TEST nvmf_target_disconnect 00:36:34.496 ************************************ 00:36:34.496 16:42:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:34.496 00:36:34.496 real 7m21.788s 00:36:34.496 user 16m52.217s 00:36:34.496 sys 2m8.692s 00:36:34.496 16:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.496 16:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.496 ************************************ 00:36:34.496 END TEST nvmf_host 00:36:34.496 ************************************ 00:36:34.496 16:42:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:34.496 16:42:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:34.496 16:42:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:34.496 16:42:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:34.496 16:42:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:34.496 16:42:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:34.756 ************************************ 00:36:34.756 START TEST nvmf_target_core_interrupt_mode 00:36:34.756 ************************************ 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:34.756 * Looking for test storage... 00:36:34.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.756 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:34.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.757 --rc genhtml_branch_coverage=1 00:36:34.757 --rc genhtml_function_coverage=1 00:36:34.757 --rc genhtml_legend=1 00:36:34.757 --rc geninfo_all_blocks=1 00:36:34.757 --rc geninfo_unexecuted_blocks=1 00:36:34.757 00:36:34.757 ' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:34.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.757 --rc genhtml_branch_coverage=1 00:36:34.757 --rc genhtml_function_coverage=1 00:36:34.757 --rc genhtml_legend=1 00:36:34.757 --rc geninfo_all_blocks=1 00:36:34.757 --rc geninfo_unexecuted_blocks=1 00:36:34.757 00:36:34.757 ' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:34.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.757 --rc genhtml_branch_coverage=1 00:36:34.757 --rc genhtml_function_coverage=1 00:36:34.757 --rc genhtml_legend=1 00:36:34.757 --rc geninfo_all_blocks=1 00:36:34.757 --rc geninfo_unexecuted_blocks=1 00:36:34.757 00:36:34.757 ' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:34.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.757 --rc genhtml_branch_coverage=1 00:36:34.757 --rc genhtml_function_coverage=1 00:36:34.757 --rc genhtml_legend=1 00:36:34.757 --rc geninfo_all_blocks=1 00:36:34.757 --rc geninfo_unexecuted_blocks=1 00:36:34.757 00:36:34.757 ' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:34.757 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:35.017 ************************************ 00:36:35.017 START TEST nvmf_abort 00:36:35.017 ************************************ 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:35.017 * Looking for test storage... 00:36:35.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.017 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:35.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.018 --rc genhtml_branch_coverage=1 00:36:35.018 --rc genhtml_function_coverage=1 00:36:35.018 --rc genhtml_legend=1 00:36:35.018 --rc geninfo_all_blocks=1 00:36:35.018 --rc geninfo_unexecuted_blocks=1 00:36:35.018 00:36:35.018 ' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:35.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.018 --rc genhtml_branch_coverage=1 00:36:35.018 --rc genhtml_function_coverage=1 00:36:35.018 --rc genhtml_legend=1 00:36:35.018 --rc geninfo_all_blocks=1 00:36:35.018 --rc geninfo_unexecuted_blocks=1 00:36:35.018 00:36:35.018 ' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:35.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.018 --rc genhtml_branch_coverage=1 00:36:35.018 --rc genhtml_function_coverage=1 00:36:35.018 --rc genhtml_legend=1 00:36:35.018 --rc geninfo_all_blocks=1 00:36:35.018 --rc geninfo_unexecuted_blocks=1 00:36:35.018 00:36:35.018 ' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:35.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.018 --rc genhtml_branch_coverage=1 00:36:35.018 --rc genhtml_function_coverage=1 00:36:35.018 --rc genhtml_legend=1 00:36:35.018 --rc geninfo_all_blocks=1 00:36:35.018 --rc geninfo_unexecuted_blocks=1 00:36:35.018 00:36:35.018 ' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.018 16:42:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.018 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:35.019 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:35.019 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.019 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:41.590 16:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.590 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:41.590 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:41.591 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:41.591 Found net devices under 0000:af:00.0: cvl_0_0 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:41.591 Found net devices under 0000:af:00.1: cvl_0_1 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:41.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:41.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:36:41.591 00:36:41.591 --- 10.0.0.2 ping statistics --- 00:36:41.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.591 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:41.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:41.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:36:41.591 00:36:41.591 --- 10.0.0.1 ping statistics --- 00:36:41.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.591 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1213662 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1213662 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1213662 ']' 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.591 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.591 [2024-12-16 16:42:29.528736] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:41.591 [2024-12-16 16:42:29.529641] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:41.591 [2024-12-16 16:42:29.529671] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.592 [2024-12-16 16:42:29.608901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:41.592 [2024-12-16 16:42:29.630614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.592 [2024-12-16 16:42:29.630648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.592 [2024-12-16 16:42:29.630655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.592 [2024-12-16 16:42:29.630661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.592 [2024-12-16 16:42:29.630666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:41.592 [2024-12-16 16:42:29.631996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:41.592 [2024-12-16 16:42:29.632151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.592 [2024-12-16 16:42:29.632152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:41.592 [2024-12-16 16:42:29.693910] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:41.592 [2024-12-16 16:42:29.694717] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:41.592 [2024-12-16 16:42:29.695052] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
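Annotation: nvmf/common.sh@508-@510 launches the target binary inside the namespace with a three-core mask (-m 0xE, matching the "Total cores available: 3" notice and the three reactor start-ups) and --interrupt-mode, then waitforlisten blocks until pid 1213662 answers on /var/tmp/spdk.sock. A rough stand-in for that launch-and-wait step follows; polling rpc.py rpc_get_methods as the readiness probe is an assumption, since waitforlisten's internals are not shown in this excerpt:

    # Hypothetical condensed launch-and-wait; paths and flags are from the trace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do                               # ~10 s budget
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break                        # assumed probe
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died' >&2; exit 1; }
        sleep 0.1
    done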
00:36:41.592 [2024-12-16 16:42:29.695178] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 [2024-12-16 16:42:29.760820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 Malloc0 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 Delay0 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 [2024-12-16 16:42:29.848764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.592 16:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:41.592 [2024-12-16 16:42:29.972828] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:43.495 Initializing NVMe Controllers 00:36:43.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:43.495 controller IO queue size 128 less than required 00:36:43.495 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:43.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:43.495 Initialization complete. Launching workers. 
00:36:43.495 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37245 00:36:43.495 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37306, failed to submit 66 00:36:43.495 success 37245, unsuccessful 61, failed 0 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:43.495 rmmod nvme_tcp 00:36:43.495 rmmod nvme_fabrics 00:36:43.495 rmmod nvme_keyring 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1213662 ']' 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1213662 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1213662 ']' 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1213662 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:43.495 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1213662 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1213662' 00:36:43.754 killing process with pid 1213662 
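Annotation: target/abort.sh@17-@30 (traced at 16:42:29 above) provisions the target entirely over JSON-RPC and then runs the abort example against it. The delay bdev is the point of the test: wrapping Malloc0 in Delay0 with 1000000-microsecond latencies guarantees a deep backlog of in-flight commands for the initiator to abort. The final accounting is self-consistent: of 37306 abort commands submitted, 37245 succeeded and 61 did not (37245 + 61 = 37306), another 66 aborts could not be submitted, and 127 I/Os completed normally. The sketch below replays the traced rpc_cmd calls as direct rpc.py invocations — a close equivalent, since rpc_cmd in the harness is a thin wrapper over the same RPC socket (which, being filesystem-based, is reachable from outside the namespace):

    # The provisioning sequence exactly as traced, then the abort run.
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0           # 64 MB bdev, 4 KiB blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # 1 s artificial latency
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128                    # 1 s run, queue depth 128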
00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1213662 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1213662 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.754 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:46.292 00:36:46.292 real 0m11.012s 00:36:46.292 user 0m10.070s 00:36:46.292 sys 0m5.691s 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.292 ************************************ 00:36:46.292 END TEST nvmf_abort 00:36:46.292 ************************************ 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:46.292 ************************************ 00:36:46.292 START TEST nvmf_ns_hotplug_stress 00:36:46.292 ************************************ 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:46.292 * Looking for test storage... 
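Annotation: teardown is deliberately surgical. Every rule the suite installs carries an '-m comment --comment SPDK_NVMF:...' tag (common.sh@790 above), so the iptr helper (common.sh@791) strips only those rules with a save/filter/restore round-trip instead of flushing the chain; remove_spdk_ns then deletes the namespace, and the leftover address on cvl_0_1 is flushed. The whole nvmf_abort test comes in at roughly 11 s wall-clock. The cleanup pipeline, verbatim from the trace:

    # Drop only the SPDK_NVMF-tagged rules; unrelated firewall state survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore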
00:36:46.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.292 --rc genhtml_branch_coverage=1 00:36:46.292 --rc genhtml_function_coverage=1 00:36:46.292 --rc genhtml_legend=1 00:36:46.292 --rc geninfo_all_blocks=1 00:36:46.292 --rc geninfo_unexecuted_blocks=1 00:36:46.292 00:36:46.292 ' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.292 --rc genhtml_branch_coverage=1 00:36:46.292 --rc genhtml_function_coverage=1 00:36:46.292 --rc genhtml_legend=1 00:36:46.292 --rc geninfo_all_blocks=1 00:36:46.292 --rc geninfo_unexecuted_blocks=1 00:36:46.292 00:36:46.292 ' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.292 --rc genhtml_branch_coverage=1 00:36:46.292 --rc genhtml_function_coverage=1 00:36:46.292 --rc genhtml_legend=1 00:36:46.292 --rc geninfo_all_blocks=1 00:36:46.292 --rc geninfo_unexecuted_blocks=1 00:36:46.292 00:36:46.292 ' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.292 --rc genhtml_branch_coverage=1 00:36:46.292 --rc genhtml_function_coverage=1 
00:36:46.292 --rc genhtml_legend=1 00:36:46.292 --rc geninfo_all_blocks=1 00:36:46.292 --rc geninfo_unexecuted_blocks=1 00:36:46.292 00:36:46.292 ' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.292 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
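Annotation: before the hotplug test proper, the harness decides which lcov coverage flags it may use by comparing the installed lcov version against 2 (the "lt 1.15 2" call traced above). scripts/common.sh@333-@368 does a field-wise decimal compare: both versions are split on dots and dashes and the first unequal field decides. A compact reconstruction of the traced logic — numeric fields are assumed, as the [[ ... =~ ^[0-9]+$ ]] guards in the trace suggest, and treating missing fields as zero is an assumption here:

    # Field-wise "less than" mirroring scripts/common.sh as traced above.
    lt() {
        local -a v1 v2; local i n
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo "pre-2.0 lcov: enable the --rc coverage options above"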
00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:46.293 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:52.865 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:52.865 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:52.865 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:52.866 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:52.866 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:52.866 
16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:52.866 Found net devices under 0000:af:00.0: cvl_0_0 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:52.866 Found net devices under 0000:af:00.1: cvl_0_1 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:52.866 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:52.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:52.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:36:52.866 00:36:52.866 --- 10.0.0.2 ping statistics --- 00:36:52.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.866 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:52.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:52.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:36:52.866 00:36:52.866 --- 10.0.0.1 ping statistics --- 00:36:52.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.866 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1217583 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1217583 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:52.866 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1217583 ']' 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
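The nvmf_tcp_init steps traced above (@250-@291) build a loopback test topology out of the two cvl NICs discovered earlier: cvl_0_0 is moved into a fresh network namespace, cvl_0_0_ns_spdk, and addressed 10.0.0.2/24 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side); an iptables rule opens TCP port 4420 on the initiator interface, and the two pings verify reachability in both directions. A minimal standalone sketch of the same setup, using this run's device names (they differ per machine):

    # hedged sketch of the topology nvmf_tcp_init builds above
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1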
00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:52.867 [2024-12-16 16:42:40.588786] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:52.867 [2024-12-16 16:42:40.589735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:52.867 [2024-12-16 16:42:40.589771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:52.867 [2024-12-16 16:42:40.670706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:52.867 [2024-12-16 16:42:40.692693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:52.867 [2024-12-16 16:42:40.692728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:52.867 [2024-12-16 16:42:40.692735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:52.867 [2024-12-16 16:42:40.692741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:52.867 [2024-12-16 16:42:40.692746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:52.867 [2024-12-16 16:42:40.694046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:52.867 [2024-12-16 16:42:40.694156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.867 [2024-12-16 16:42:40.694156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:52.867 [2024-12-16 16:42:40.756931] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:52.867 [2024-12-16 16:42:40.757872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:52.867 [2024-12-16 16:42:40.758246] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:52.867 [2024-12-16 16:42:40.758364] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
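nvmf_tgt is then started inside that namespace in interrupt mode (-i 0 -e 0xFFFF --interrupt-mode -m 0xE, i.e. tracepoints on and reactors on cores 1-3), and waitforlisten polls until the app answers on /var/tmp/spdk.sock; the NOTICE lines above confirm each reactor and spdk_thread ended up in intr mode. A hedged sketch of the start-and-wait idiom (the polling loop below is illustrative, not the autotest helper itself; spdk_get_version is a stock SPDK RPC):

    # launch the target in the namespace, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done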
00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:52.867 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:52.867 [2024-12-16 16:42:40.990889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.867 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:52.867 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:52.867 [2024-12-16 16:42:41.355316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.867 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:53.125 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:53.384 Malloc0 00:36:53.384 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:53.384 Delay0 00:36:53.384 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.643 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:53.900 NULL1 00:36:53.900 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
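Steps @27-@36 above provision the target over RPC, and @40-@50 start a 30 s perf run against it while a loop hot-removes, re-adds, and resizes namespaces for as long as the perf process stays alive; that loop is what generates the long run of nvmf_subsystem_*_ns and bdev_null_resize calls that follows. Condensed into a hedged shell sketch (every RPC below mirrors a call visible in this trace; RPC_PY and NQN are shorthand, and the Delay0 latencies are in microseconds, so 1000000 = 1 s on every I/O path):

    RPC_PY=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC_PY nvmf_create_transport -t tcp -o -u 8192
    $RPC_PY nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
    $RPC_PY nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $RPC_PY nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC_PY bdev_malloc_create 32 512 -b Malloc0          # 32 MB, 512 B blocks
    $RPC_PY bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # deliberately slow namespace
    $RPC_PY nvmf_subsystem_add_ns $NQN Delay0             # NSID 1
    $RPC_PY bdev_null_create NULL1 1000 512               # 1000 MB null bdev
    $RPC_PY nvmf_subsystem_add_ns $NQN NULL1              # NSID 2

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 $PERF_PID 2>/dev/null; do               # loop until perf finishes
        $RPC_PY nvmf_subsystem_remove_ns $NQN 1           # hot-remove NSID 1...
        $RPC_PY nvmf_subsystem_add_ns $NQN Delay0         # ...and bring it back
        null_size=$((null_size + 1))
        $RPC_PY bdev_null_resize NULL1 $null_size         # grow NULL1 by 1 MB per pass
    done

The repeated "Read completed with error (sct=0, sc=11)" suppression messages are the intended fallout: reads in flight against the just-removed namespace complete with generic status 0x0b (Invalid Namespace or Format). Once perf's 30 s expire, the kill -0 check fails ("No such process" at @44 further down), the loop exits, and perf's latency summary shows the expected asymmetry between the two namespaces: NSID 1 (backed by Delay0) averages ~39.8 ms per I/O at ~2.2k IOPS, while NSID 2 (NULL1) sustains ~18.5k IOPS at ~6.9 ms.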
00:36:54.161 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1217840 00:36:54.161 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:54.161 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:36:54.161 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.095 Read completed with error (sct=0, sc=11) 00:36:55.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.352 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.352 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:55.352 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:55.610 true 00:36:55.610 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:36:55.610 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.544 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.544 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:56.544 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:56.801 true 00:36:56.801 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:36:56.802 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.058 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.315 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:57.315 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:57.573 true 00:36:57.574 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:36:57.574 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.510 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.769 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:58.769 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:59.026 true 00:36:59.026 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:36:59.026 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.284 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.284 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:59.284 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:59.542 true 00:36:59.542 16:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:36:59.542 16:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:00.915 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:37:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:00.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:00.915 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:00.915 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:01.173 true 00:37:01.173 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:01.173 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.107 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.107 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:02.107 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:02.364 true 00:37:02.364 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:02.364 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.622 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.880 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:02.880 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:02.880 true 00:37:02.880 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:02.880 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.072 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.072 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:04.072 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:04.330 true 00:37:04.330 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:04.330 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.588 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.845 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:04.845 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:04.845 true 00:37:05.103 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:05.103 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.036 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.295 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:06.295 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:06.553 true 00:37:06.553 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:06.553 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.485 16:42:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.485 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:07.485 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:07.742 true 00:37:07.742 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:07.742 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.000 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.000 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:08.000 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:08.258 true 00:37:08.258 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:08.258 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 16:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.450 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:09.450 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:09.708 true 00:37:09.708 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:09.708 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.641 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.898 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:10.898 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:10.898 true 00:37:10.898 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:10.898 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.156 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.414 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:11.414 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:11.672 true 00:37:11.672 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:11.672 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.614 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.875 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:12.875 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:13.133 true 00:37:13.133 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:13.133 16:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:14.066 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.066 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:14.066 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:14.324 true 00:37:14.324 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:14.324 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.581 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.839 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:14.839 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:14.839 true 00:37:15.097 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:15.097 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.031 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.288 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:16.288 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:16.546 true 00:37:16.546 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:16.546 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:17.477 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:17.477 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:17.477 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:17.735 true 00:37:17.735 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:17.735 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.992 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.249 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:18.249 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:18.250 true 00:37:18.250 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:18.250 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 16:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.623 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:19.623 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:19.881 true 00:37:19.881 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:19.881 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.814 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.814 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:20.814 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:21.072 true 00:37:21.072 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:21.072 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.330 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.588 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:21.588 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:21.588 true 00:37:21.588 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:21.588 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.962 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1026 00:37:22.962 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:23.219 true 00:37:23.219 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:23.219 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.152 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.152 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:24.152 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:24.411 Initializing NVMe Controllers 00:37:24.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:24.411 Controller IO queue size 128, less than required. 00:37:24.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:24.411 Controller IO queue size 128, less than required. 00:37:24.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:24.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:24.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:24.411 Initialization complete. Launching workers. 
00:37:24.411 ======================================================== 00:37:24.411 Latency(us) 00:37:24.411 Device Information : IOPS MiB/s Average min max 00:37:24.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2234.66 1.09 39799.80 1930.04 1012309.41 00:37:24.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18516.02 9.04 6913.13 1158.78 448162.91 00:37:24.411 ======================================================== 00:37:24.411 Total : 20750.68 10.13 10454.72 1158.78 1012309.41 00:37:24.411 00:37:24.411 true 00:37:24.411 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217840 00:37:24.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1217840) - No such process 00:37:24.411 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1217840 00:37:24.411 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.669 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:24.927 null0 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:24.927 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:25.186 null1 00:37:25.186 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.186 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.186 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:25.445 null2 00:37:25.445 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.445 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.445 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:25.445 null3 00:37:25.445 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.445 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.445 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:25.703 null4 00:37:25.703 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.703 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.704 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:25.962 null5 00:37:25.962 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.962 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.962 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:25.962 null6 00:37:25.962 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.962 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.962 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:26.221 null7 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:26.221 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
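With the single-namespace stress loop done, @54-@55 tear down NSIDs 1 and 2 and the test moves to the parallel phase traced here: eight null bdevs (null0 through null7) are created, then eight add_remove workers are forked into the background, one per namespace ID, each performing ten add/remove cycles against cnode1; the wait on all eight worker PIDs appears just below. A hedged sketch of the worker and the fork/wait harness (mirrors sh@14-@18 and @58-@64 above; the script creates the bdevs in one pass and forks the workers in a second, merged here for brevity):

    # one worker: hot-add and hot-remove a fixed NSID ten times (sh@14-@18)
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $RPC_PY nvmf_subsystem_add_ns -n "$nsid" $NQN "$bdev"
            $RPC_PY nvmf_subsystem_remove_ns $NQN "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $RPC_PY bdev_null_create "null$i" 100 4096   # 100 MB, 4 KiB blocks
        add_remove $((i + 1)) "null$i" &             # NSID i+1 backed by null$i
        pids+=($!)
    done
    wait "${pids[@]}"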
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1223015 1223016 1223018 1223020 1223022 1223024 1223026 1223028
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:26.222 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:26.481 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:26.769 16:43:15
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.769 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.068 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:27.069 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.069 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.069 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.347 16:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.347 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.611 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.611 16:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:27.611 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:27.612 
16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:27.612 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:27.870 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.870 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.871 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:28.129 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:28.387 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.387 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:28.387 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:28.387 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.388 
16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:28.388 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.646 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.646 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:28.647 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:28.647 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.647 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.647 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:28.905 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:29.163 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.421 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:29.422 16:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.422 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:29.680 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.939 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:30.197 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:30.198 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.456 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.457 16:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:30.457 rmmod nvme_tcp 00:37:30.457 rmmod nvme_fabrics 00:37:30.457 rmmod nvme_keyring 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1217583 ']' 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1217583 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1217583 ']' 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1217583 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:30.457 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217583 00:37:30.457 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:30.457 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:30.457 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217583' 00:37:30.457 killing process with pid 1217583 00:37:30.457 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1217583 00:37:30.457 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1217583 00:37:30.716 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:30.716 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:30.717 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:33.253 00:37:33.253 real 0m46.796s 00:37:33.253 user 2m56.443s 00:37:33.253 sys 0m19.257s 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:33.253 ************************************ 00:37:33.253 END TEST nvmf_ns_hotplug_stress 00:37:33.253 ************************************ 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:33.253 ************************************ 00:37:33.253 START TEST nvmf_delete_subsystem 00:37:33.253 
************************************ 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:33.253 * Looking for test storage... 00:37:33.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:33.253 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.254 --rc genhtml_branch_coverage=1 00:37:33.254 --rc genhtml_function_coverage=1 00:37:33.254 --rc genhtml_legend=1 00:37:33.254 --rc geninfo_all_blocks=1 00:37:33.254 --rc geninfo_unexecuted_blocks=1 00:37:33.254 00:37:33.254 ' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.254 --rc genhtml_branch_coverage=1 00:37:33.254 --rc genhtml_function_coverage=1 00:37:33.254 --rc genhtml_legend=1 00:37:33.254 --rc geninfo_all_blocks=1 00:37:33.254 --rc geninfo_unexecuted_blocks=1 00:37:33.254 00:37:33.254 ' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.254 --rc genhtml_branch_coverage=1 00:37:33.254 --rc genhtml_function_coverage=1 00:37:33.254 --rc genhtml_legend=1 00:37:33.254 --rc geninfo_all_blocks=1 00:37:33.254 --rc geninfo_unexecuted_blocks=1 00:37:33.254 00:37:33.254 ' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.254 --rc genhtml_branch_coverage=1 00:37:33.254 --rc genhtml_function_coverage=1 00:37:33.254 --rc 
genhtml_legend=1 00:37:33.254 --rc geninfo_all_blocks=1 00:37:33.254 --rc geninfo_unexecuted_blocks=1 00:37:33.254 00:37:33.254 ' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:33.254 16:43:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:33.254 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:33.255 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:38.530 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:38.530 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:38.530 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:38.530 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.530 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.531 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:38.531 Found net devices under 0000:af:00.0: cvl_0_0 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:38.531 Found net devices under 0000:af:00.1: cvl_0_1 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:38.531 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:38.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:38.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:37:38.790 00:37:38.790 --- 10.0.0.2 ping statistics --- 00:37:38.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.790 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:38.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:38.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:37:38.790 00:37:38.790 --- 10.0.0.1 ping statistics --- 00:37:38.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.790 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:38.790 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1227211 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1227211 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1227211 ']' 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:39.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
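The nvmf_tcp_init sequence traced above splits the rig's two e810 ports between the root namespace (initiator side) and a dedicated target namespace, then verifies reachability in both directions. A minimal sketch of the same plumbing, reconstructed from the nvmf/common.sh trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this run and will differ on other rigs:

  # Target/initiator split, as performed by nvmf_tcp_init in the trace above.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP (port 4420), tagged SPDK_NVMF so the iptr cleanup seen
  # earlier can strip the rule again via iptables-save | grep -v | iptables-restore:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                           # root ns -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> initiator port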
00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:39.049 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.049 [2024-12-16 16:43:27.456385] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:39.049 [2024-12-16 16:43:27.457300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:39.049 [2024-12-16 16:43:27.457333] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:39.049 [2024-12-16 16:43:27.536980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:39.049 [2024-12-16 16:43:27.559025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:39.049 [2024-12-16 16:43:27.559062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.049 [2024-12-16 16:43:27.559069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.049 [2024-12-16 16:43:27.559075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:39.049 [2024-12-16 16:43:27.559080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:39.049 [2024-12-16 16:43:27.564116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.049 [2024-12-16 16:43:27.564119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.049 [2024-12-16 16:43:27.627160] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:39.049 [2024-12-16 16:43:27.627212] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:39.049 [2024-12-16 16:43:27.627356] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
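nvmfappstart then launches the target inside that namespace with the flags recorded above: -m 0x3 pins two reactors to cores 0-1, and --interrupt-mode makes them sleep on file descriptors instead of busy-polling, which is why the log shows each spdk_thread being switched to intr mode. The sketch below abbreviates the workspace path, and its readiness poll is only an approximation of what waitforlisten does, not its verbatim source:

  # Launch the target in the namespace and wait for its RPC socket (sketch).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # Approximate waitforlisten: poll until the RPC server answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died at startup
      sleep 0.1
  done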
00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 [2024-12-16 16:43:27.700772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 [2024-12-16 16:43:27.729203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 NULL1 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.308 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 Delay0 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1227335 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:39.308 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:39.308 [2024-12-16 16:43:27.841471] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
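Collected from the delete_subsystem.sh trace above (lines @15 through @30 of the script), the provisioning sequence and the perf workload it races against look roughly like this. The rpc() wrapper is just shorthand for the rpc.py invocations in the log, and the flag annotations are interpretive; the transport and perf flags themselves are verbatim from the trace:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, flags as recorded above
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                     # allow any host, up to 10 namespaces
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512 B blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s avg/p99 latency on reads and writes
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Keep up to 128 commands in flight so the delete below races live I/O:
  # 5 s run, 70/30 random R/W, 512 B I/O, cores 2-3 (-c 0xC).
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                                # let the workload ramp up first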
00:37:41.206 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:41.206 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.206 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Write completed with error (sct=0, sc=8) 00:37:41.464 starting I/O failed: -6 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.464 [2024-12-16 16:43:29.968305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101ef70 is same with the state(6) to be set 00:37:41.464 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 
00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 starting I/O failed: -6 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 starting I/O failed: -6 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 starting I/O failed: -6 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Write completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 Read completed with error (sct=0, sc=8) 00:37:41.465 starting I/O 
failed: -6
00:37:41.465 Read completed with error (sct=0, sc=8)
00:37:41.465 [ repeated Read/Write completed with error (sct=0, sc=8) entries, interleaved with further "starting I/O failed: -6" markers, continue at 00:37:41.465 and 00:37:42.400 while the subsystem is deleted under the running workload ]
00:37:41.465 [2024-12-16 16:43:29.969859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f823000d4d0 is same with the state(6) to be set
00:37:42.400 [2024-12-16 16:43:30.935972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d190 is same with the state(6) to be set
00:37:42.400 [2024-12-16 16:43:30.972063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f400 is same with the state(6) to be set
00:37:42.400 [2024-12-16 16:43:30.972719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f7c0 is same with the state(6) to be set
00:37:42.400 [2024-12-16 16:43:30.973483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f823000d800 is same with the state(6) to be set
00:37:42.400 [2024-12-16 16:43:30.974122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f823000d060 is same with the state(6) to be set
00:37:42.400 Initializing NVMe Controllers
00:37:42.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:42.400 Controller IO queue size 128, less than required.
00:37:42.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:42.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:42.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:42.400 Initialization complete. Launching workers.
00:37:42.400 ========================================================
00:37:42.400                                                                           Latency(us)
00:37:42.400 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:42.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     163.92       0.08  907951.50     281.52 1011249.00
00:37:42.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     159.94       0.08  919206.76     229.91 1043008.99
00:37:42.400 ========================================================
00:37:42.400 Total                                                                    :     323.86       0.16  913510.08     229.91 1043008.99
00:37:42.400
00:37:42.400 [2024-12-16 16:43:30.974763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d190 (9): Bad file descriptor
00:37:42.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:42.400 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.400 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:42.400 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1227335
00:37:42.400 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:42.968 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:37:42.968 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1227335
00:37:42.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1227335) - No such process
00:37:42.968 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1227335
00:37:42.968 [ common/autotest_common.sh@640-@679: the NOT helper runs "wait 1227335", which fails as expected (es=1), so the negated check passes ]
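For reference, the poll traced above (delete_subsystem.sh@34-@38) reduces to the following pattern, a minimal sketch using the pid and bounds from the trace, not the verbatim test script:

    # Wait for spdk_nvme_perf to exit after nvmf_delete_subsystem is issued;
    # give up after ~15 s (30 iterations x 0.5 s). kill -0 only probes the pid.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf pid $perf_pid still alive" >&2; exit 1; }
        sleep 0.5
    done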
00:37:42.968 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:37:42.969 [ each rpc_cmd below is wrapped by the usual common/autotest_common.sh@563/@10/@591 xtrace_disable / set +x / result-check entries, elided here ]
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:42.969 [2024-12-16 16:43:31.509067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1227807
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227807
00:37:42.969 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:37:43.227 [2024-12-16 16:43:31.595814] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
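The rpc_cmd wrapper in the harness forwards these RPCs to the target (typically via scripts/rpc.py). The same subsystem setup can be reproduced standalone, assuming the default RPC socket /var/tmp/spdk.sock:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0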
00:37:43.484 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:43.484 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227807
00:37:43.484 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:37:44.048 [ the @60/@57/@58 poll repeats at 00:37:44.048, 00:37:44.613, 00:37:45.178, 00:37:45.743 and 00:37:46.000 while spdk_nvme_perf (pid 1227807) runs its 3-second workload ]
00:37:46.567 Initializing NVMe Controllers
00:37:46.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:46.567 Controller IO queue size 128, less than required.
00:37:46.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:46.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:46.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:46.567 Initialization complete. Launching workers.
00:37:46.567 ========================================================
00:37:46.567                                                                           Latency(us)
00:37:46.567 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:46.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002585.56 1000194.18 1041632.28
00:37:46.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1006924.56 1000390.73 1042063.97
00:37:46.567 ========================================================
00:37:46.567 Total                                                                    :     256.00       0.12 1004755.06 1000194.18 1042063.97
00:37:46.567
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227807
00:37:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1227807) - No such process
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1227807
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:46.567 rmmod nvme_tcp
00:37:46.567 rmmod nvme_fabrics
00:37:46.567 rmmod nvme_keyring
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1227211 ']'
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1227211
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1227211 ']'
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1227211
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1227211
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1227211'
00:37:46.567 killing process with pid 1227211
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1227211
00:37:46.567 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1227211
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:46.826 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:49.363
00:37:49.363 real	0m16.073s
00:37:49.363 user	0m26.225s
00:37:49.363 sys	0m6.108s
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:49.363 ************************************
00:37:49.363 END TEST nvmf_delete_subsystem
00:37:49.363 ************************************
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
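Condensed, the teardown traced above (killprocess followed by nvmf_tcp_fini) is the sequence below; a sketch of the pattern with names from the trace, where the netns deletion inside _remove_spdk_ns is an assumed implementation detail:

    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf_tgt reactor process
    modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                         # _remove_spdk_ns (assumed)
    ip -4 addr flush cvl_0_1                                # clear the initiator-side test address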
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:49.363 ************************************
00:37:49.363 START TEST nvmf_host_management
00:37:49.363 ************************************
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:37:49.363 * Looking for test storage...
00:37:49.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:49.363 [ scripts/common.sh@333-@368: cmp_versions splits 1.15 and 2 on IFS=.-: into ver1/ver2 and walks the components; 1 < 2 on the first component, so lt succeeds and returns 0 ]
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:49.363 [ common/autotest_common.sh@1724-@1725: LCOV_OPTS and LCOV are exported with the lcov_branch_coverage/lcov_function_coverage/genhtml/geninfo coverage flags; the same flag block is echoed four times in the original output ]
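The version check traced above compares dot-separated components numerically, left to right. A condensed sketch of that logic, not the verbatim scripts/common.sh helper:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2  -> exit status 0 if true
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # walk up to the longer of the two component lists, padding with 0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} == ${ver2[v]:-0} )) && continue
            case $2 in
                '<') (( ${ver1[v]:-0} < ${ver2[v]:-0} )); return ;;
                '>') (( ${ver1[v]:-0} > ${ver2[v]:-0} )); return ;;
            esac
        done
        return 1    # all components equal: neither strictly less nor greater
    }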
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:49.363 [ nvmf/common.sh@9-@22: test defaults are set: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 (via nvme gen-hostnqn), NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562, NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ]
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:49.363 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:49.363 [ paths/export.sh@2-@6: PATH is rebuilt by prepending the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories (each already present several times over) ahead of the system directories, then exported and echoed ]
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:49.364 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:54.640 [ nvmf/common.sh@320-@344: the e810, x722 and mlx arrays are declared and filled from pci_bus_cache with the supported device IDs: Intel 0x1592/0x159b (e810) and 0x37d2 (x722); Mellanox 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013 ]
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:54.640 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:54.640 Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:54.640 [ nvmf/common.sh@368-@378: driver "ice" is neither unknown nor unbound, and the device ID is not a Mellanox 0x1017/0x1019, so no rebinding is needed for TCP ]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:54.641 Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:54.641 [ the same @368-@378 checks pass for the second port ]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:54.641 Found net devices under 0000:af:00.0: cvl_0_0
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:54.641 [ nvmf/common.sh@410-@427: the same sysfs walk runs for 0000:af:00.1 ]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:54.641 Found net devices under 0000:af:00.1: cvl_0_1
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
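The discovery above maps each supported PCI function to its kernel net interface through sysfs. A minimal sketch of the same walk, using the device paths found above; the operstate check is an assumption standing in for the harness's up/up test:

    net_devs=()
    for pci in 0000:af:00.0 0000:af:00.1; do        # the two E810 (0x159b) functions found
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            dev=${path##*/}                          # e.g. cvl_0_0, cvl_0_1
            [[ $(< "/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
        done
    done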
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:54.641 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:54.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:54.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms
00:37:54.901
00:37:54.901 --- 10.0.0.2 ping statistics ---
00:37:54.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:54.901 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:54.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:54.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:37:54.901
00:37:54.901 --- 10.0.0.1 ping statistics ---
00:37:54.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:54.901 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
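The nvmf_tcp_init steps above build a two-endpoint TCP topology on a single host: the target port moves into its own network namespace while the initiator port stays in the root namespace, so traffic really crosses the wire between the two E810 ports. Condensed, with the addresses and names from the trace:

    ip netns add cvl_0_0_ns_spdk                          # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                    # sanity check: root ns -> target ns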
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:54.901 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1231916
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1231916
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1231916 ']'
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:55.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
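nvmfappstart therefore launches the target inside the target namespace. Spelled out, the command traced above is the following (path abbreviated, backgrounding as in the harness):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # -m 0x1E pins reactors to cores 1-4; -e 0xFFFF enables all tracepoint groups;
    # --interrupt-mode makes the reactors sleep on events instead of busy-polling.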
00:37:55.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:55.160 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.160 [2024-12-16 16:43:43.611287] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:55.160 [2024-12-16 16:43:43.612297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:55.160 [2024-12-16 16:43:43.612338] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.160 [2024-12-16 16:43:43.692494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:55.160 [2024-12-16 16:43:43.716107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.160 [2024-12-16 16:43:43.716145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.160 [2024-12-16 16:43:43.716152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.160 [2024-12-16 16:43:43.716158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.160 [2024-12-16 16:43:43.716163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.160 [2024-12-16 16:43:43.717715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:55.160 [2024-12-16 16:43:43.717821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:55.160 [2024-12-16 16:43:43.717925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.160 [2024-12-16 16:43:43.717926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:37:55.419 [2024-12-16 16:43:43.781461] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:55.419 [2024-12-16 16:43:43.782598] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:55.419 [2024-12-16 16:43:43.782810] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:55.419 [2024-12-16 16:43:43.783138] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:55.419 [2024-12-16 16:43:43.783176] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
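With the path up, nvmf_tgt is started inside the namespace with --interrupt-mode, which is why the reactors and every nvmf_tgt_poll_group thread report coming up in intr mode in the notices above. A hedged sketch of the start-and-wait step; the polling loop only approximates what waitforlisten does, and the rpc.py and binary paths are the in-tree locations assumed from the trace:

# Flags as logged: -i 0 instance id, -e 0xFFFF tracepoint group mask,
# -m 0x1E run reactors on cores 1-4.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app finishes initialization.
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done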
00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.419 [2024-12-16 16:43:43.846664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.419 Malloc0 00:37:55.419 [2024-12-16 16:43:43.934924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1231957 00:37:55.419 16:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1231957 /var/tmp/bdevperf.sock 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1231957 ']' 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:55.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:55.419 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:55.420 { 00:37:55.420 "params": { 00:37:55.420 "name": "Nvme$subsystem", 00:37:55.420 "trtype": "$TEST_TRANSPORT", 00:37:55.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.420 "adrfam": "ipv4", 00:37:55.420 "trsvcid": "$NVMF_PORT", 00:37:55.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:55.420 "hdgst": ${hdgst:-false}, 00:37:55.420 "ddgst": ${ddgst:-false} 00:37:55.420 }, 00:37:55.420 "method": "bdev_nvme_attach_controller" 00:37:55.420 } 00:37:55.420 EOF 00:37:55.420 )") 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
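The block above is gen_nvmf_target_json expanding its heredoc template: for subsystem 0 it renders one bdev_nvme_attach_controller stanza, jq normalizes the result, and bdevperf receives it as an anonymous file (the --json /dev/fd/63 in the logged command line comes from process substitution). A reduced sketch of the pattern; note the outer "subsystems"/"bdev" wrapper is paraphrased from nvmf/common.sh and is not visible in this trace:

gen_target_json() {
    cat <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
EOF
}
# bash exposes the substitution as /dev/fd/<n>, matching the logged /dev/fd/63.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json) \
    -q 64 -o 65536 -w verify -t 10

host_management.sh then polls rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 through jq -r '.bdevs[0].num_read_ops' until at least 100 reads have completed (98 on the first pass, 707 after a 0.25 s sleep in the trace that follows).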
00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:55.420 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:55.420 "params": { 00:37:55.420 "name": "Nvme0", 00:37:55.420 "trtype": "tcp", 00:37:55.420 "traddr": "10.0.0.2", 00:37:55.420 "adrfam": "ipv4", 00:37:55.420 "trsvcid": "4420", 00:37:55.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:55.420 "hdgst": false, 00:37:55.420 "ddgst": false 00:37:55.420 }, 00:37:55.420 "method": "bdev_nvme_attach_controller" 00:37:55.420 }' 00:37:55.678 [2024-12-16 16:43:44.030441] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:55.678 [2024-12-16 16:43:44.030488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231957 ] 00:37:55.678 [2024-12-16 16:43:44.108436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.678 [2024-12-16 16:43:44.130981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.936 Running I/O for 10 seconds... 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=98 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 98 -ge 100 ']' 00:37:55.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:56.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:56.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:56.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:56.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:56.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:56.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:37:56.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:37:56.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:56.454 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:56.454 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:56.454 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:56.454 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.454 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:56.454 [2024-12-16 16:43:44.842481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:56.454 [2024-12-16 16:43:44.842524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:56.454 [2024-12-16 16:43:44.842534] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:37:56.454 [2024-12-16 16:43:44.842541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:56.454 [2024-12-16 16:43:44.842548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:37:56.454 [2024-12-16 16:43:44.842555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:56.454 [2024-12-16 16:43:44.842563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:37:56.454 [2024-12-16 16:43:44.842569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:56.454 [2024-12-16 16:43:44.842577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173490 is same with the state(6) to be set
00:37:56.454 [2024-12-16 16:43:44.844706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5240 is same with the state(6) to be set
00:37:56.454 [... the preceding tcp.c:1790 message repeated verbatim for each timestamp from 16:43:44.844744 through 16:43:44.845121 ...]
00:37:56.454 [2024-12-16 16:43:44.845224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:56.454 [2024-12-16 16:43:44.845253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:56.455 [... analogous READ (cid:1 through cid:63, lba:98432 through lba:106368, len:128 each) / ABORTED - SQ DELETION pairs repeated, timestamps 16:43:44.845268 through 16:43:44.846205 ...]
00:37:56.456 [2024-12-16 16:43:44.846213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2186f50 is same with the state(6) to be set
00:37:56.456 [2024-12-16 16:43:44.847165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:37:56.456 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:56.456 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:37:56.456 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:56.456 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:56.456 task offset: 98304 on job bdev=Nvme0n1 fails
00:37:56.456
00:37:56.456 Latency(us)
00:37:56.456 [2024-12-16T15:43:45.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:56.456 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:56.456 Job: Nvme0n1 ended in about 0.41 seconds with error
00:37:56.456 Verification LBA range: start 0x0 length 0x400
00:37:56.456 Nvme0n1 : 0.41 1894.96 118.43 157.91 0.00 30352.91 3339.22 26963.38
00:37:56.456 [2024-12-16T15:43:45.065Z] ===================================================================================================================
00:37:56.456 [2024-12-16T15:43:45.065Z] Total : 1894.96 118.43 157.91 0.00 30352.91 3339.22
26963.38 00:37:56.456 [2024-12-16 16:43:44.849491] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:56.456 [2024-12-16 16:43:44.849511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173490 (9): Bad file descriptor 00:37:56.456 [2024-12-16 16:43:44.850378] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:56.456 [2024-12-16 16:43:44.850492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:56.456 [2024-12-16 16:43:44.850515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:56.456 [2024-12-16 16:43:44.850529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:56.456 [2024-12-16 16:43:44.850537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:56.456 [2024-12-16 16:43:44.850544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:56.456 [2024-12-16 16:43:44.850551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2173490 00:37:56.456 [2024-12-16 16:43:44.850570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173490 (9): Bad file descriptor 00:37:56.456 [2024-12-16 16:43:44.850581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:56.456 [2024-12-16 16:43:44.850588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:56.456 [2024-12-16 16:43:44.850597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:56.456 [2024-12-16 16:43:44.850605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
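The failure above is the injected one: with verify I/O in flight, host_management.sh calls nvmf_subsystem_remove_host, so the target deletes the submission queue (the ABORTED - SQ DELETION completions), and bdevperf's automatic controller reset then fails FABRIC CONNECT with "does not allow host" until nvmf_subsystem_add_host restores access. A sketch of the two RPCs as issued against the target's default socket; the RPC names are taken from the trace and the rpc.py path is assumed:

# Revoke the initiator's host NQN: in-flight I/O is aborted and
# reconnect attempts are rejected at FABRIC CONNECT time.
./scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-grant access so a later reset/reconnect can succeed.
./scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0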
00:37:56.456 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.456 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1231957 00:37:57.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1231957) - No such process 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:57.386 { 00:37:57.386 "params": { 00:37:57.386 "name": "Nvme$subsystem", 00:37:57.386 "trtype": "$TEST_TRANSPORT", 00:37:57.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:57.386 "adrfam": "ipv4", 00:37:57.386 "trsvcid": "$NVMF_PORT", 00:37:57.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:57.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:57.386 "hdgst": ${hdgst:-false}, 00:37:57.386 "ddgst": ${ddgst:-false} 00:37:57.386 }, 00:37:57.386 "method": "bdev_nvme_attach_controller" 00:37:57.386 } 00:37:57.386 EOF 00:37:57.386 )") 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:57.386 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:57.386 "params": { 00:37:57.386 "name": "Nvme0", 00:37:57.386 "trtype": "tcp", 00:37:57.386 "traddr": "10.0.0.2", 00:37:57.386 "adrfam": "ipv4", 00:37:57.386 "trsvcid": "4420", 00:37:57.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.386 "hdgst": false, 00:37:57.386 "ddgst": false 00:37:57.386 }, 00:37:57.386 "method": "bdev_nvme_attach_controller" 00:37:57.386 }' 00:37:57.386 [2024-12-16 16:43:45.916352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:37:57.386 [2024-12-16 16:43:45.916400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232204 ] 00:37:57.386 [2024-12-16 16:43:45.990654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.645 [2024-12-16 16:43:46.011750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.903 Running I/O for 1 seconds... 00:37:58.837 2048.00 IOPS, 128.00 MiB/s 00:37:58.837 Latency(us) 00:37:58.837 [2024-12-16T15:43:47.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.837 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:58.837 Verification LBA range: start 0x0 length 0x400 00:37:58.837 Nvme0n1 : 1.02 2061.65 128.85 0.00 0.00 30559.20 5929.45 26838.55 00:37:58.837 [2024-12-16T15:43:47.446Z] =================================================================================================================== 00:37:58.837 [2024-12-16T15:43:47.446Z] Total : 2061.65 128.85 0.00 0.00 30559.20 5929.45 26838.55 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.096 rmmod nvme_tcp 00:37:59.096 rmmod nvme_fabrics 00:37:59.096 rmmod nvme_keyring 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1231916 ']' 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1231916 00:37:59.096 16:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1231916 ']' 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1231916 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231916 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231916' 00:37:59.096 killing process with pid 1231916 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1231916 00:37:59.096 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1231916 00:37:59.355 [2024-12-16 16:43:47.793407] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.356 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:01.892 00:38:01.892 real 0m12.427s 00:38:01.892 user 
0m18.612s 00:38:01.892 sys 0m6.221s 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:01.892 ************************************ 00:38:01.892 END TEST nvmf_host_management 00:38:01.892 ************************************ 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.892 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:01.892 ************************************ 00:38:01.892 START TEST nvmf_lvol 00:38:01.892 ************************************ 00:38:01.893 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:01.893 * Looking for test storage... 00:38:01.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:01.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.893 --rc genhtml_branch_coverage=1 00:38:01.893 --rc genhtml_function_coverage=1 00:38:01.893 --rc genhtml_legend=1 00:38:01.893 --rc geninfo_all_blocks=1 00:38:01.893 --rc geninfo_unexecuted_blocks=1 00:38:01.893 00:38:01.893 ' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:01.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.893 --rc genhtml_branch_coverage=1 00:38:01.893 --rc genhtml_function_coverage=1 00:38:01.893 --rc genhtml_legend=1 00:38:01.893 --rc geninfo_all_blocks=1 00:38:01.893 --rc geninfo_unexecuted_blocks=1 00:38:01.893 00:38:01.893 ' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:01.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.893 --rc genhtml_branch_coverage=1 00:38:01.893 --rc genhtml_function_coverage=1 00:38:01.893 --rc genhtml_legend=1 00:38:01.893 --rc geninfo_all_blocks=1 00:38:01.893 --rc geninfo_unexecuted_blocks=1 00:38:01.893 00:38:01.893 ' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:01.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.893 --rc genhtml_branch_coverage=1 00:38:01.893 --rc genhtml_function_coverage=1 
00:38:01.893 --rc genhtml_legend=1 00:38:01.893 --rc geninfo_all_blocks=1 00:38:01.893 --rc geninfo_unexecuted_blocks=1 00:38:01.893 00:38:01.893 ' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.893 16:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:01.893 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:01.894 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:07.168 16:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:07.168 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.168 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:07.169 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:07.169 Found net devices under 0000:af:00.0: cvl_0_0 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:07.169 Found net devices under 0000:af:00.1: cvl_0_1 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:07.169 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.429 
16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:07.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:07.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:38:07.429 00:38:07.429 --- 10.0.0.2 ping statistics --- 00:38:07.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.429 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:38:07.429 16:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:07.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:07.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:38:07.429 00:38:07.429 --- 10.0.0.1 ping statistics --- 00:38:07.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.429 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:07.429 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1235893 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1235893 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1235893 ']' 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:07.689 [2024-12-16 16:43:56.102884] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:07.689 [2024-12-16 16:43:56.103857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:07.689 [2024-12-16 16:43:56.103892] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.689 [2024-12-16 16:43:56.182217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:07.689 [2024-12-16 16:43:56.204593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.689 [2024-12-16 16:43:56.204627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.689 [2024-12-16 16:43:56.204634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:07.689 [2024-12-16 16:43:56.204639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:07.689 [2024-12-16 16:43:56.204645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.689 [2024-12-16 16:43:56.205887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.689 [2024-12-16 16:43:56.205994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.689 [2024-12-16 16:43:56.205995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:07.689 [2024-12-16 16:43:56.268476] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:07.689 [2024-12-16 16:43:56.269296] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:07.689 [2024-12-16 16:43:56.269454] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:07.689 [2024-12-16 16:43:56.269640] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:07.689 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:07.948 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.948 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:07.948 [2024-12-16 16:43:56.502752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.948 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:08.207 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:08.207 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:08.465 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:08.465 16:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:08.724 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:08.983 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f3b8f372-781b-44dd-b8c8-5e50ae8be41a 00:38:08.983 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f3b8f372-781b-44dd-b8c8-5e50ae8be41a lvol 20 00:38:09.241 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0a809849-269a-4bd6-a45c-e041bd39a0cd 00:38:09.241 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:09.241 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a809849-269a-4bd6-a45c-e041bd39a0cd 00:38:09.500 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:09.759 [2024-12-16 16:43:58.142618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:38:09.759 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:09.759 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1236365 00:38:09.759 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:09.759 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:11.136 16:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0a809849-269a-4bd6-a45c-e041bd39a0cd MY_SNAPSHOT 00:38:11.136 16:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cf4b396c-4487-410d-8e73-4b276b73b513 00:38:11.136 16:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0a809849-269a-4bd6-a45c-e041bd39a0cd 30 00:38:11.395 16:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cf4b396c-4487-410d-8e73-4b276b73b513 MY_CLONE 00:38:11.654 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ac8098bb-d597-4efd-a8c9-4ef4a80e55bd 00:38:11.654 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ac8098bb-d597-4efd-a8c9-4ef4a80e55bd 00:38:11.913 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1236365 00:38:20.095 Initializing NVMe Controllers 00:38:20.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:20.095 Controller IO queue size 128, less than required. 00:38:20.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:20.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:20.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:20.095 Initialization complete. Launching workers. 
00:38:20.095 ======================================================== 00:38:20.095 Latency(us) 00:38:20.095 Device Information : IOPS MiB/s Average min max 00:38:20.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12221.10 47.74 10479.07 1788.93 53652.61 00:38:20.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12082.50 47.20 10595.80 3890.11 52109.44 00:38:20.095 ======================================================== 00:38:20.095 Total : 24303.60 94.94 10537.10 1788.93 53652.61 00:38:20.095 00:38:20.095 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:20.354 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a809849-269a-4bd6-a45c-e041bd39a0cd 00:38:20.613 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3b8f372-781b-44dd-b8c8-5e50ae8be41a 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:20.872 rmmod nvme_tcp 00:38:20.872 rmmod nvme_fabrics 00:38:20.872 rmmod nvme_keyring 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1235893 ']' 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1235893 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1235893 ']' 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1235893 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1235893 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1235893' 00:38:20.872 killing process with pid 1235893 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1235893 00:38:20.872 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1235893 00:38:21.131 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:21.132 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.038 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:23.038 00:38:23.038 real 0m21.659s 00:38:23.038 user 0m55.307s 00:38:23.038 sys 0m9.680s 00:38:23.038 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.038 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:23.038 ************************************ 00:38:23.038 END TEST nvmf_lvol 00:38:23.038 ************************************ 00:38:23.297 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:23.298 ************************************ 00:38:23.298 START TEST nvmf_lvs_grow 00:38:23.298 
************************************ 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:23.298 * Looking for test storage... 00:38:23.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.298 --rc genhtml_branch_coverage=1 00:38:23.298 --rc genhtml_function_coverage=1 00:38:23.298 --rc genhtml_legend=1 00:38:23.298 --rc geninfo_all_blocks=1 00:38:23.298 --rc geninfo_unexecuted_blocks=1 00:38:23.298 00:38:23.298 ' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.298 --rc genhtml_branch_coverage=1 00:38:23.298 --rc genhtml_function_coverage=1 00:38:23.298 --rc genhtml_legend=1 00:38:23.298 --rc geninfo_all_blocks=1 00:38:23.298 --rc geninfo_unexecuted_blocks=1 00:38:23.298 00:38:23.298 ' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.298 --rc genhtml_branch_coverage=1 00:38:23.298 --rc genhtml_function_coverage=1 00:38:23.298 --rc genhtml_legend=1 00:38:23.298 --rc geninfo_all_blocks=1 00:38:23.298 --rc geninfo_unexecuted_blocks=1 00:38:23.298 00:38:23.298 ' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:23.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.298 --rc genhtml_branch_coverage=1 00:38:23.298 --rc genhtml_function_coverage=1 00:38:23.298 --rc genhtml_legend=1 00:38:23.298 --rc geninfo_all_blocks=1 00:38:23.298 --rc geninfo_unexecuted_blocks=1 00:38:23.298 00:38:23.298 ' 00:38:23.298 16:44:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.298 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
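Note on the trace above: each sourcing round of paths/export.sh prepends the Go, protoc, and golangci-lint directories onto PATH again, so the exported PATH accumulates many duplicate entries. A minimal sketch of a dedup-on-prepend helper (hypothetical; not part of the SPDK scripts) that would keep PATH flat:

  # pathmunge: prepend a directory to PATH only if it is not already present
  pathmunge() {
      case ":$PATH:" in
          *":$1:"*) ;;            # already in PATH, do nothing
          *) PATH="$1:$PATH" ;;   # otherwise prepend
      esac
  }
  pathmunge /opt/go/1.21.1/bin
  pathmunge /opt/protoc/21.7/bin
  pathmunge /opt/golangci/1.54.2/bin
  export PATH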
00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.299 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.558 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:23.558 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:23.558 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:23.558 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:30.130 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:30.130 16:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
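The trace above builds lookup tables of Intel E810/X722 and Mellanox PCI vendor:device IDs and then keeps only the E810 entries in pci_devs. As a rough illustration of how such a scan can work against sysfs (a sketch, not the actual gather_supported_nvmf_pci_devs implementation), matching the 0x8086:0x159b pair reported just below:

  # Enumerate PCI devices and report netdevs bound to Intel E810 (0x8086:0x159b) NICs.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")    # e.g. 0x8086
      device=$(<"$dev/device")    # e.g. 0x159b
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
          # the kernel exposes the bound netdev name under .../net/
          for net in "$dev"/net/*; do
              echo "Found net device under ${dev##*/}: ${net##*/}"
          done
      fi
  done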
00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:30.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:30.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:30.131 Found net devices under 0000:af:00.0: cvl_0_0 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:30.131 Found net devices under 0000:af:00.1: cvl_0_1 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:30.131 16:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:30.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:30.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:38:30.131 00:38:30.131 --- 10.0.0.2 ping statistics --- 00:38:30.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.131 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:30.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:30.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:38:30.131 00:38:30.131 --- 10.0.0.1 ping statistics --- 00:38:30.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.131 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:30.131 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1241467 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1241467 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1241467 ']' 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:30.132 16:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:30.132 [2024-12-16 16:44:17.888698] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
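The nvmf_tcp_init sequence traced above moves one port of the E810 pair into a dedicated namespace, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over real hardware; the two pings confirm reachability in both directions. Condensed from the commands in the trace (interface names are specific to this test bed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator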
00:38:30.132 [2024-12-16 16:44:17.889686] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:30.132 [2024-12-16 16:44:17.889724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:30.132 [2024-12-16 16:44:17.967312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.132 [2024-12-16 16:44:17.989541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:30.132 [2024-12-16 16:44:17.989577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:30.132 [2024-12-16 16:44:17.989584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:30.132 [2024-12-16 16:44:17.989590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:30.132 [2024-12-16 16:44:17.989596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:30.132 [2024-12-16 16:44:17.990076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.132 [2024-12-16 16:44:18.053115] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:30.132 [2024-12-16 16:44:18.053318] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:30.132 [2024-12-16 16:44:18.282727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:30.132 ************************************ 00:38:30.132 START TEST lvs_grow_clean 00:38:30.132 ************************************ 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:30.132 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:30.391 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:30.391 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:30.391 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:30.391 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:30.391 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:30.391 16:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6e183a39-9301-4c38-aaa3-0e937595a6ab lvol 150 00:38:30.649 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=39cfd05b-e6a3-4223-a13b-5cbb588355b6 00:38:30.649 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:30.649 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:30.908 [2024-12-16 16:44:19.326467] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:30.908 [2024-12-16 16:44:19.326594] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:30.908 true 00:38:30.909 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:30.909 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:31.168 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:31.168 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:31.168 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39cfd05b-e6a3-4223-a13b-5cbb588355b6 00:38:31.426 16:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:31.686 [2024-12-16 16:44:20.066917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1241885 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1241885 /var/tmp/bdevperf.sock 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1241885 ']' 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:31.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:31.686 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:31.945 [2024-12-16 16:44:20.328737] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:31.945 [2024-12-16 16:44:20.328784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241885 ] 00:38:31.945 [2024-12-16 16:44:20.403762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.945 [2024-12-16 16:44:20.425903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.945 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:31.945 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:31.945 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:32.514 Nvme0n1 00:38:32.514 16:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:32.514 [ 00:38:32.514 { 00:38:32.514 "name": "Nvme0n1", 00:38:32.514 "aliases": [ 00:38:32.514 "39cfd05b-e6a3-4223-a13b-5cbb588355b6" 00:38:32.514 ], 00:38:32.514 "product_name": "NVMe disk", 00:38:32.514 "block_size": 4096, 00:38:32.514 "num_blocks": 38912, 00:38:32.514 "uuid": "39cfd05b-e6a3-4223-a13b-5cbb588355b6", 00:38:32.514 "numa_id": 1, 00:38:32.514 "assigned_rate_limits": { 00:38:32.514 "rw_ios_per_sec": 0, 00:38:32.514 "rw_mbytes_per_sec": 0, 00:38:32.514 "r_mbytes_per_sec": 0, 00:38:32.514 "w_mbytes_per_sec": 0 00:38:32.514 }, 00:38:32.514 "claimed": false, 00:38:32.514 "zoned": false, 00:38:32.514 "supported_io_types": { 00:38:32.514 "read": true, 00:38:32.514 "write": true, 00:38:32.514 "unmap": true, 00:38:32.514 "flush": true, 00:38:32.514 "reset": true, 00:38:32.514 "nvme_admin": true, 00:38:32.514 "nvme_io": true, 00:38:32.514 "nvme_io_md": false, 00:38:32.514 "write_zeroes": true, 00:38:32.514 "zcopy": false, 00:38:32.514 "get_zone_info": false, 00:38:32.514 "zone_management": false, 00:38:32.514 "zone_append": false, 00:38:32.514 "compare": true, 00:38:32.514 "compare_and_write": true, 00:38:32.514 "abort": true, 00:38:32.514 "seek_hole": false, 00:38:32.514 "seek_data": false, 00:38:32.514 "copy": true, 
00:38:32.514 "nvme_iov_md": false 00:38:32.514 }, 00:38:32.514 "memory_domains": [ 00:38:32.514 { 00:38:32.514 "dma_device_id": "system", 00:38:32.514 "dma_device_type": 1 00:38:32.514 } 00:38:32.514 ], 00:38:32.514 "driver_specific": { 00:38:32.514 "nvme": [ 00:38:32.514 { 00:38:32.514 "trid": { 00:38:32.514 "trtype": "TCP", 00:38:32.514 "adrfam": "IPv4", 00:38:32.514 "traddr": "10.0.0.2", 00:38:32.514 "trsvcid": "4420", 00:38:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:32.514 }, 00:38:32.514 "ctrlr_data": { 00:38:32.514 "cntlid": 1, 00:38:32.514 "vendor_id": "0x8086", 00:38:32.514 "model_number": "SPDK bdev Controller", 00:38:32.514 "serial_number": "SPDK0", 00:38:32.514 "firmware_revision": "25.01", 00:38:32.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.514 "oacs": { 00:38:32.514 "security": 0, 00:38:32.514 "format": 0, 00:38:32.514 "firmware": 0, 00:38:32.514 "ns_manage": 0 00:38:32.514 }, 00:38:32.514 "multi_ctrlr": true, 00:38:32.514 "ana_reporting": false 00:38:32.514 }, 00:38:32.514 "vs": { 00:38:32.514 "nvme_version": "1.3" 00:38:32.514 }, 00:38:32.514 "ns_data": { 00:38:32.514 "id": 1, 00:38:32.514 "can_share": true 00:38:32.514 } 00:38:32.514 } 00:38:32.514 ], 00:38:32.514 "mp_policy": "active_passive" 00:38:32.514 } 00:38:32.514 } 00:38:32.514 ] 00:38:32.514 16:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1242108 00:38:32.514 16:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:32.514 16:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:32.772 Running I/O for 10 seconds... 
00:38:33.708 Latency(us) 00:38:33.708 [2024-12-16T15:44:22.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.708 Nvme0n1 : 1.00 22670.00 88.55 0.00 0.00 0.00 0.00 0.00 00:38:33.708 [2024-12-16T15:44:22.317Z] =================================================================================================================== 00:38:33.708 [2024-12-16T15:44:22.317Z] Total : 22670.00 88.55 0.00 0.00 0.00 0.00 0.00 00:38:33.708 00:38:34.646 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:34.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:34.646 Nvme0n1 : 2.00 23067.50 90.11 0.00 0.00 0.00 0.00 0.00 00:38:34.646 [2024-12-16T15:44:23.255Z] =================================================================================================================== 00:38:34.646 [2024-12-16T15:44:23.255Z] Total : 23067.50 90.11 0.00 0.00 0.00 0.00 0.00 00:38:34.646 00:38:34.905 true 00:38:34.906 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:34.906 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:34.906 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:34.906 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:34.906 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1242108 00:38:35.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:35.843 Nvme0n1 : 3.00 23210.00 90.66 0.00 0.00 0.00 0.00 0.00 00:38:35.843 [2024-12-16T15:44:24.452Z] =================================================================================================================== 00:38:35.843 [2024-12-16T15:44:24.452Z] Total : 23210.00 90.66 0.00 0.00 0.00 0.00 0.00 00:38:35.843 00:38:36.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:36.780 Nvme0n1 : 4.00 23313.00 91.07 0.00 0.00 0.00 0.00 0.00 00:38:36.780 [2024-12-16T15:44:25.389Z] =================================================================================================================== 00:38:36.780 [2024-12-16T15:44:25.389Z] Total : 23313.00 91.07 0.00 0.00 0.00 0.00 0.00 00:38:36.780 00:38:37.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.716 Nvme0n1 : 5.00 23400.20 91.41 0.00 0.00 0.00 0.00 0.00 00:38:37.716 [2024-12-16T15:44:26.325Z] =================================================================================================================== 00:38:37.716 [2024-12-16T15:44:26.325Z] Total : 23400.20 91.41 0.00 0.00 0.00 0.00 0.00 00:38:37.716 00:38:38.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:38.652 Nvme0n1 : 6.00 23437.17 91.55 0.00 0.00 0.00 0.00 0.00 00:38:38.652 [2024-12-16T15:44:27.261Z] 
=================================================================================================================== 00:38:38.652 [2024-12-16T15:44:27.261Z] Total : 23437.17 91.55 0.00 0.00 0.00 0.00 0.00 00:38:38.652 00:38:39.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:39.589 Nvme0n1 : 7.00 23481.71 91.73 0.00 0.00 0.00 0.00 0.00 00:38:39.589 [2024-12-16T15:44:28.198Z] =================================================================================================================== 00:38:39.589 [2024-12-16T15:44:28.198Z] Total : 23481.71 91.73 0.00 0.00 0.00 0.00 0.00 00:38:39.589 00:38:40.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:40.967 Nvme0n1 : 8.00 23515.12 91.86 0.00 0.00 0.00 0.00 0.00 00:38:40.967 [2024-12-16T15:44:29.576Z] =================================================================================================================== 00:38:40.967 [2024-12-16T15:44:29.576Z] Total : 23515.12 91.86 0.00 0.00 0.00 0.00 0.00 00:38:40.967 00:38:41.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:41.904 Nvme0n1 : 9.00 23516.67 91.86 0.00 0.00 0.00 0.00 0.00 00:38:41.904 [2024-12-16T15:44:30.513Z] =================================================================================================================== 00:38:41.904 [2024-12-16T15:44:30.513Z] Total : 23516.67 91.86 0.00 0.00 0.00 0.00 0.00 00:38:41.904 00:38:42.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.841 Nvme0n1 : 10.00 23489.10 91.75 0.00 0.00 0.00 0.00 0.00 00:38:42.841 [2024-12-16T15:44:31.450Z] =================================================================================================================== 00:38:42.841 [2024-12-16T15:44:31.450Z] Total : 23489.10 91.75 0.00 0.00 0.00 0.00 0.00 00:38:42.841 00:38:42.841 00:38:42.841 Latency(us) 00:38:42.841 [2024-12-16T15:44:31.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.841 Nvme0n1 : 10.00 23487.75 91.75 0.00 0.00 5446.24 2995.93 27462.70 00:38:42.841 [2024-12-16T15:44:31.450Z] =================================================================================================================== 00:38:42.841 [2024-12-16T15:44:31.450Z] Total : 23487.75 91.75 0.00 0.00 5446.24 2995.93 27462.70 00:38:42.841 { 00:38:42.841 "results": [ 00:38:42.841 { 00:38:42.841 "job": "Nvme0n1", 00:38:42.841 "core_mask": "0x2", 00:38:42.841 "workload": "randwrite", 00:38:42.841 "status": "finished", 00:38:42.841 "queue_depth": 128, 00:38:42.841 "io_size": 4096, 00:38:42.841 "runtime": 10.003344, 00:38:42.841 "iops": 23487.745697838644, 00:38:42.841 "mibps": 91.7490066321822, 00:38:42.841 "io_failed": 0, 00:38:42.841 "io_timeout": 0, 00:38:42.841 "avg_latency_us": 5446.242287958272, 00:38:42.841 "min_latency_us": 2995.9314285714286, 00:38:42.841 "max_latency_us": 27462.704761904763 00:38:42.841 } 00:38:42.841 ], 00:38:42.841 "core_count": 1 00:38:42.841 } 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1241885 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1241885 ']' 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1241885 
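The grow step exercised mid-run (bdev_lvol_grow_lvstore above) works by enlarging the backing AIO file, rescanning the AIO bdev, and then letting the lvstore claim the new space; the cluster count read back through jq moves from 49 to 99 with 4 MiB clusters. A condensed replay of the traced commands (lvstore UUID as reported in this run):

  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  truncate -s 400M "$AIO"            # grow the backing file 200M -> 400M
  $RPC bdev_aio_rescan aio_bdev      # aio bdev picks up the new block count
  $RPC bdev_lvol_grow_lvstore -u 6e183a39-9301-4c38-aaa3-0e937595a6ab
  $RPC bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab \
      | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after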
00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241885 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:42.841 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241885' 00:38:42.841 killing process with pid 1241885 00:38:42.842 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1241885 00:38:42.842 Received shutdown signal, test time was about 10.000000 seconds 00:38:42.842 00:38:42.842 Latency(us) 00:38:42.842 [2024-12-16T15:44:31.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.842 [2024-12-16T15:44:31.451Z] =================================================================================================================== 00:38:42.842 [2024-12-16T15:44:31.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:42.842 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1241885 00:38:42.842 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:43.100 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:43.359 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:43.359 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:43.619 16:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:43.619 [2024-12-16 16:44:32.166530] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 
00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:43.619 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:43.879 request: 00:38:43.879 { 00:38:43.879 "uuid": "6e183a39-9301-4c38-aaa3-0e937595a6ab", 00:38:43.879 "method": "bdev_lvol_get_lvstores", 00:38:43.879 "req_id": 1 00:38:43.879 } 00:38:43.879 Got JSON-RPC error response 00:38:43.879 response: 00:38:43.879 { 00:38:43.879 "code": -19, 00:38:43.879 "message": "No such device" 00:38:43.879 } 00:38:43.879 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:43.879 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:43.879 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:43.879 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:43.879 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:44.138 aio_bdev 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
39cfd05b-e6a3-4223-a13b-5cbb588355b6 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=39cfd05b-e6a3-4223-a13b-5cbb588355b6 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:44.138 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:44.397 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 39cfd05b-e6a3-4223-a13b-5cbb588355b6 -t 2000 00:38:44.397 [ 00:38:44.397 { 00:38:44.397 "name": "39cfd05b-e6a3-4223-a13b-5cbb588355b6", 00:38:44.397 "aliases": [ 00:38:44.397 "lvs/lvol" 00:38:44.397 ], 00:38:44.397 "product_name": "Logical Volume", 00:38:44.397 "block_size": 4096, 00:38:44.397 "num_blocks": 38912, 00:38:44.397 "uuid": "39cfd05b-e6a3-4223-a13b-5cbb588355b6", 00:38:44.397 "assigned_rate_limits": { 00:38:44.397 "rw_ios_per_sec": 0, 00:38:44.397 "rw_mbytes_per_sec": 0, 00:38:44.397 "r_mbytes_per_sec": 0, 00:38:44.397 "w_mbytes_per_sec": 0 00:38:44.397 }, 00:38:44.397 "claimed": false, 00:38:44.397 "zoned": false, 00:38:44.397 "supported_io_types": { 00:38:44.397 "read": true, 00:38:44.397 "write": true, 00:38:44.397 "unmap": true, 00:38:44.397 "flush": false, 00:38:44.397 "reset": true, 00:38:44.397 "nvme_admin": false, 00:38:44.397 "nvme_io": false, 00:38:44.397 "nvme_io_md": false, 00:38:44.397 "write_zeroes": true, 00:38:44.397 "zcopy": false, 00:38:44.397 "get_zone_info": false, 00:38:44.397 "zone_management": false, 00:38:44.397 "zone_append": false, 00:38:44.397 "compare": false, 00:38:44.397 "compare_and_write": false, 00:38:44.397 "abort": false, 00:38:44.397 "seek_hole": true, 00:38:44.397 "seek_data": true, 00:38:44.397 "copy": false, 00:38:44.397 "nvme_iov_md": false 00:38:44.397 }, 00:38:44.397 "driver_specific": { 00:38:44.397 "lvol": { 00:38:44.397 "lvol_store_uuid": "6e183a39-9301-4c38-aaa3-0e937595a6ab", 00:38:44.397 "base_bdev": "aio_bdev", 00:38:44.397 "thin_provision": false, 00:38:44.397 "num_allocated_clusters": 38, 00:38:44.397 "snapshot": false, 00:38:44.397 "clone": false, 00:38:44.397 "esnap_clone": false 00:38:44.397 } 00:38:44.397 } 00:38:44.397 } 00:38:44.397 ] 00:38:44.397 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:44.397 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:44.397 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:44.657 16:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:44.657 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:44.657 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:44.916 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:44.916 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39cfd05b-e6a3-4223-a13b-5cbb588355b6 00:38:45.175 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e183a39-9301-4c38-aaa3-0e937595a6ab 00:38:45.434 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:45.434 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:45.434 00:38:45.434 real 0m15.690s 00:38:45.434 user 0m15.225s 00:38:45.434 sys 0m1.490s 00:38:45.434 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.434 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:45.434 ************************************ 00:38:45.434 END TEST lvs_grow_clean 00:38:45.434 ************************************ 00:38:45.693 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:45.693 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:45.693 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:45.694 ************************************ 00:38:45.694 START TEST lvs_grow_dirty 00:38:45.694 ************************************ 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:45.694 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:45.952 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:45.952 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:45.952 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:38:45.952 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:38:45.952 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:46.212 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:46.212 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:46.212 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d lvol 150 00:38:46.471 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=20d377e1-151a-4723-a24d-15dcaf3474a4 00:38:46.471 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:46.471 16:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:46.729 [2024-12-16 16:44:35.078468] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:46.729 [2024-12-16 16:44:35.078596] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:46.729 true 00:38:46.729 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:38:46.729 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:46.729 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:46.729 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:46.987 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20d377e1-151a-4723-a24d-15dcaf3474a4 00:38:47.245 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:47.245 [2024-12-16 16:44:35.826872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:47.245 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1244397 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1244397 /var/tmp/bdevperf.sock 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1244397 ']' 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:47.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
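The trace above exports the lvol over NVMe/TCP and then launches bdevperf against it. A minimal sketch of that export sequence, assuming a running nvmf_tgt and reusing the subsystem NQN, lvol UUID, and listener address from the trace ($rpc is shorthand for the full rpc.py path):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create the subsystem (-a: allow any host, -s: serial number SPDK0)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  # Attach the lvol bdev as a namespace of the subsystem
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20d377e1-151a-4723-a24d-15dcaf3474a4
  # Listen for I/O and discovery on the test address
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches over the fabric (the bdev_nvme_attach_controller call below) rather than to the local bdev, so the I/O path under test includes the TCP transport.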
00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.504 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:47.504 [2024-12-16 16:44:36.056898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:47.504 [2024-12-16 16:44:36.056944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244397 ] 00:38:47.763 [2024-12-16 16:44:36.130791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.763 [2024-12-16 16:44:36.152800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.763 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:47.763 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:47.763 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:48.022 Nvme0n1 00:38:48.282 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:48.282 [ 00:38:48.282 { 00:38:48.282 "name": "Nvme0n1", 00:38:48.282 "aliases": [ 00:38:48.282 "20d377e1-151a-4723-a24d-15dcaf3474a4" 00:38:48.282 ], 00:38:48.282 "product_name": "NVMe disk", 00:38:48.282 "block_size": 4096, 00:38:48.282 "num_blocks": 38912, 00:38:48.282 "uuid": "20d377e1-151a-4723-a24d-15dcaf3474a4", 00:38:48.282 "numa_id": 1, 00:38:48.282 "assigned_rate_limits": { 00:38:48.282 "rw_ios_per_sec": 0, 00:38:48.282 "rw_mbytes_per_sec": 0, 00:38:48.282 "r_mbytes_per_sec": 0, 00:38:48.282 "w_mbytes_per_sec": 0 00:38:48.282 }, 00:38:48.282 "claimed": false, 00:38:48.282 "zoned": false, 00:38:48.282 "supported_io_types": { 00:38:48.282 "read": true, 00:38:48.282 "write": true, 00:38:48.282 "unmap": true, 00:38:48.282 "flush": true, 00:38:48.282 "reset": true, 00:38:48.282 "nvme_admin": true, 00:38:48.282 "nvme_io": true, 00:38:48.282 "nvme_io_md": false, 00:38:48.282 "write_zeroes": true, 00:38:48.282 "zcopy": false, 00:38:48.282 "get_zone_info": false, 00:38:48.282 "zone_management": false, 00:38:48.282 "zone_append": false, 00:38:48.282 "compare": true, 00:38:48.282 "compare_and_write": true, 00:38:48.282 "abort": true, 00:38:48.282 "seek_hole": false, 00:38:48.282 "seek_data": false, 00:38:48.282 "copy": true, 00:38:48.282 "nvme_iov_md": false 00:38:48.282 }, 00:38:48.282 "memory_domains": [ 00:38:48.282 { 00:38:48.282 "dma_device_id": "system", 00:38:48.282 "dma_device_type": 1 00:38:48.282 } 00:38:48.282 ], 00:38:48.282 "driver_specific": { 00:38:48.282 "nvme": [ 00:38:48.282 { 00:38:48.282 "trid": { 00:38:48.282 "trtype": "TCP", 00:38:48.282 "adrfam": "IPv4", 00:38:48.282 "traddr": "10.0.0.2", 00:38:48.282 "trsvcid": "4420", 00:38:48.282 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:48.282 }, 00:38:48.282 "ctrlr_data": 
{ 00:38:48.282 "cntlid": 1, 00:38:48.282 "vendor_id": "0x8086", 00:38:48.282 "model_number": "SPDK bdev Controller", 00:38:48.282 "serial_number": "SPDK0", 00:38:48.282 "firmware_revision": "25.01", 00:38:48.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:48.282 "oacs": { 00:38:48.282 "security": 0, 00:38:48.282 "format": 0, 00:38:48.282 "firmware": 0, 00:38:48.282 "ns_manage": 0 00:38:48.282 }, 00:38:48.282 "multi_ctrlr": true, 00:38:48.282 "ana_reporting": false 00:38:48.282 }, 00:38:48.282 "vs": { 00:38:48.282 "nvme_version": "1.3" 00:38:48.282 }, 00:38:48.282 "ns_data": { 00:38:48.282 "id": 1, 00:38:48.282 "can_share": true 00:38:48.282 } 00:38:48.282 } 00:38:48.282 ], 00:38:48.282 "mp_policy": "active_passive" 00:38:48.282 } 00:38:48.282 } 00:38:48.282 ] 00:38:48.282 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1244614 00:38:48.282 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:48.282 16:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:48.541 Running I/O for 10 seconds... 00:38:49.478 Latency(us) 00:38:49.478 [2024-12-16T15:44:38.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:49.478 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:49.478 [2024-12-16T15:44:38.087Z] =================================================================================================================== 00:38:49.478 [2024-12-16T15:44:38.087Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:49.478 00:38:50.413 16:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:38:50.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:50.413 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:50.413 [2024-12-16T15:44:39.022Z] =================================================================================================================== 00:38:50.413 [2024-12-16T15:44:39.022Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:50.413 00:38:50.673 true 00:38:50.673 16:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:38:50.673 16:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:50.673 16:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:50.673 16:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:50.673 16:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1244614 00:38:51.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:51.610 Nvme0n1 : 
3.00 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:38:51.610 [2024-12-16T15:44:40.219Z] =================================================================================================================== 00:38:51.610 [2024-12-16T15:44:40.219Z] Total : 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:38:51.610 00:38:52.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:52.548 Nvme0n1 : 4.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:38:52.548 [2024-12-16T15:44:41.157Z] =================================================================================================================== 00:38:52.548 [2024-12-16T15:44:41.157Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:38:52.548 00:38:53.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:53.492 Nvme0n1 : 5.00 23266.40 90.88 0.00 0.00 0.00 0.00 0.00 00:38:53.492 [2024-12-16T15:44:42.101Z] =================================================================================================================== 00:38:53.492 [2024-12-16T15:44:42.101Z] Total : 23266.40 90.88 0.00 0.00 0.00 0.00 0.00 00:38:53.492 00:38:54.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:54.427 Nvme0n1 : 6.00 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:38:54.427 [2024-12-16T15:44:43.036Z] =================================================================================================================== 00:38:54.427 [2024-12-16T15:44:43.036Z] Total : 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:38:54.427 00:38:55.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:55.364 Nvme0n1 : 7.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:55.364 [2024-12-16T15:44:43.973Z] =================================================================================================================== 00:38:55.364 [2024-12-16T15:44:43.973Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:55.364 00:38:56.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:56.742 Nvme0n1 : 8.00 23415.62 91.47 0.00 0.00 0.00 0.00 0.00 00:38:56.742 [2024-12-16T15:44:45.351Z] =================================================================================================================== 00:38:56.742 [2024-12-16T15:44:45.351Z] Total : 23415.62 91.47 0.00 0.00 0.00 0.00 0.00 00:38:56.742 00:38:57.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.678 Nvme0n1 : 9.00 23438.56 91.56 0.00 0.00 0.00 0.00 0.00 00:38:57.678 [2024-12-16T15:44:46.287Z] =================================================================================================================== 00:38:57.678 [2024-12-16T15:44:46.287Z] Total : 23438.56 91.56 0.00 0.00 0.00 0.00 0.00 00:38:57.678 00:38:58.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.615 Nvme0n1 : 10.00 23456.90 91.63 0.00 0.00 0.00 0.00 0.00 00:38:58.615 [2024-12-16T15:44:47.224Z] =================================================================================================================== 00:38:58.615 [2024-12-16T15:44:47.224Z] Total : 23456.90 91.63 0.00 0.00 0.00 0.00 0.00 00:38:58.615 00:38:58.615 00:38:58.615 Latency(us) 00:38:58.615 [2024-12-16T15:44:47.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:58.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.615 Nvme0n1 : 10.00 23462.18 91.65 0.00 0.00 5452.79 4743.56 26214.40 00:38:58.615 
[2024-12-16T15:44:47.224Z] =================================================================================================================== 00:38:58.615 [2024-12-16T15:44:47.224Z] Total : 23462.18 91.65 0.00 0.00 5452.79 4743.56 26214.40 00:38:58.615 { 00:38:58.615 "results": [ 00:38:58.615 { 00:38:58.615 "job": "Nvme0n1", 00:38:58.615 "core_mask": "0x2", 00:38:58.615 "workload": "randwrite", 00:38:58.615 "status": "finished", 00:38:58.615 "queue_depth": 128, 00:38:58.615 "io_size": 4096, 00:38:58.615 "runtime": 10.003205, 00:38:58.615 "iops": 23462.180371191032, 00:38:58.615 "mibps": 91.64914207496497, 00:38:58.615 "io_failed": 0, 00:38:58.615 "io_timeout": 0, 00:38:58.615 "avg_latency_us": 5452.786232428965, 00:38:58.615 "min_latency_us": 4743.558095238095, 00:38:58.615 "max_latency_us": 26214.4 00:38:58.615 } 00:38:58.615 ], 00:38:58.615 "core_count": 1 00:38:58.615 } 00:38:58.615 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1244397 00:38:58.615 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1244397 ']' 00:38:58.615 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1244397 00:38:58.615 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:58.615 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.615 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244397 00:38:58.615 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:58.615 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:58.615 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244397' 00:38:58.615 killing process with pid 1244397 00:38:58.615 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1244397 00:38:58.615 Received shutdown signal, test time was about 10.000000 seconds 00:38:58.615 00:38:58.615 Latency(us) 00:38:58.615 [2024-12-16T15:44:47.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:58.615 [2024-12-16T15:44:47.224Z] =================================================================================================================== 00:38:58.615 [2024-12-16T15:44:47.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:58.615 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1244397 00:38:58.615 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:58.874 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:59.133 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:38:59.133 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1241467 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1241467 00:38:59.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1241467 Killed "${NVMF_APP[@]}" "$@" 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1246193 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1246193 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1246193 ']' 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
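This is the step that makes the test "dirty": the first target is killed with SIGKILL so the lvstore metadata on the aio file is never cleanly closed, and a fresh target is started in interrupt mode. A rough sketch of that restart, assuming the waitforlisten helper and the network namespace from the trace (the backgrounding and pid capture are an assumption; the flags are verbatim):

  # Kill the old target uncleanly; the lvstore superblock stays dirty on disk
  kill -9 "$nvmfpid"
  # Restart the target in interrupt mode inside the test namespace
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # Test helper: poll until the RPC socket /var/tmp/spdk.sock answers
  waitforlisten "$nvmfpid"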
00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.393 16:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:59.393 [2024-12-16 16:44:47.867470] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:59.393 [2024-12-16 16:44:47.868402] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:59.393 [2024-12-16 16:44:47.868436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.393 [2024-12-16 16:44:47.948085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.393 [2024-12-16 16:44:47.969241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.393 [2024-12-16 16:44:47.969275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.393 [2024-12-16 16:44:47.969282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.393 [2024-12-16 16:44:47.969288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.393 [2024-12-16 16:44:47.969293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.393 [2024-12-16 16:44:47.969773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.652 [2024-12-16 16:44:48.033063] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:59.652 [2024-12-16 16:44:48.033284] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
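With the new target up, re-creating the aio bdev over the same backing file is what drives recovery: bdev examine finds the dirty lvstore and the blobstore replays its metadata (the "Performing recovery on blobstore" and "Recover: blob" notices below). A short sketch of that step, where $testdir is a hypothetical stand-in for the full .../spdk/test/nvmf/target path used in the trace:

  # Re-attach the backing file; examine kicks off lvstore/blobstore recovery
  $rpc bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  # Block until examine completes and the lvol bdev is registered again
  $rpc bdev_wait_for_examine
  # Verify the lvol survived the crash (2000 ms timeout)
  $rpc bdev_get_bdevs -b 20d377e1-151a-4723-a24d-15dcaf3474a4 -t 2000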
00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.652 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:59.911 [2024-12-16 16:44:48.271654] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:59.911 [2024-12-16 16:44:48.271912] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:59.911 [2024-12-16 16:44:48.272036] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:59.911 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 20d377e1-151a-4723-a24d-15dcaf3474a4 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=20d377e1-151a-4723-a24d-15dcaf3474a4 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:59.912 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 20d377e1-151a-4723-a24d-15dcaf3474a4 -t 2000 00:39:00.171 [ 00:39:00.171 { 00:39:00.171 "name": "20d377e1-151a-4723-a24d-15dcaf3474a4", 00:39:00.171 "aliases": [ 00:39:00.171 "lvs/lvol" 00:39:00.171 ], 00:39:00.171 "product_name": "Logical Volume", 00:39:00.171 "block_size": 4096, 00:39:00.171 "num_blocks": 38912, 00:39:00.171 "uuid": "20d377e1-151a-4723-a24d-15dcaf3474a4", 00:39:00.171 "assigned_rate_limits": { 00:39:00.171 "rw_ios_per_sec": 0, 00:39:00.171 "rw_mbytes_per_sec": 0, 00:39:00.171 
"r_mbytes_per_sec": 0, 00:39:00.171 "w_mbytes_per_sec": 0 00:39:00.171 }, 00:39:00.171 "claimed": false, 00:39:00.171 "zoned": false, 00:39:00.171 "supported_io_types": { 00:39:00.171 "read": true, 00:39:00.171 "write": true, 00:39:00.171 "unmap": true, 00:39:00.171 "flush": false, 00:39:00.171 "reset": true, 00:39:00.171 "nvme_admin": false, 00:39:00.171 "nvme_io": false, 00:39:00.171 "nvme_io_md": false, 00:39:00.171 "write_zeroes": true, 00:39:00.171 "zcopy": false, 00:39:00.171 "get_zone_info": false, 00:39:00.171 "zone_management": false, 00:39:00.171 "zone_append": false, 00:39:00.171 "compare": false, 00:39:00.171 "compare_and_write": false, 00:39:00.171 "abort": false, 00:39:00.171 "seek_hole": true, 00:39:00.171 "seek_data": true, 00:39:00.171 "copy": false, 00:39:00.171 "nvme_iov_md": false 00:39:00.171 }, 00:39:00.171 "driver_specific": { 00:39:00.171 "lvol": { 00:39:00.171 "lvol_store_uuid": "c0b8e3bb-3b20-4558-866e-c6a755a48b7d", 00:39:00.171 "base_bdev": "aio_bdev", 00:39:00.171 "thin_provision": false, 00:39:00.171 "num_allocated_clusters": 38, 00:39:00.171 "snapshot": false, 00:39:00.171 "clone": false, 00:39:00.171 "esnap_clone": false 00:39:00.171 } 00:39:00.171 } 00:39:00.171 } 00:39:00.171 ] 00:39:00.171 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:00.171 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:00.171 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:00.430 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:00.430 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:00.430 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:00.690 [2024-12-16 16:44:49.222253] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:00.690 16:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:00.690 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:00.949 request: 00:39:00.949 { 00:39:00.949 "uuid": "c0b8e3bb-3b20-4558-866e-c6a755a48b7d", 00:39:00.949 "method": "bdev_lvol_get_lvstores", 00:39:00.949 "req_id": 1 00:39:00.949 } 00:39:00.949 Got JSON-RPC error response 00:39:00.949 response: 00:39:00.949 { 00:39:00.949 "code": -19, 00:39:00.949 "message": "No such device" 00:39:00.949 } 00:39:00.949 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:00.949 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:00.949 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:00.949 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:00.949 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:01.208 aio_bdev 00:39:01.208 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 20d377e1-151a-4723-a24d-15dcaf3474a4 00:39:01.208 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=20d377e1-151a-4723-a24d-15dcaf3474a4 00:39:01.208 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:01.208 16:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:01.208 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:01.208 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:01.208 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:01.468 16:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 20d377e1-151a-4723-a24d-15dcaf3474a4 -t 2000 00:39:01.468 [ 00:39:01.468 { 00:39:01.468 "name": "20d377e1-151a-4723-a24d-15dcaf3474a4", 00:39:01.468 "aliases": [ 00:39:01.468 "lvs/lvol" 00:39:01.468 ], 00:39:01.468 "product_name": "Logical Volume", 00:39:01.468 "block_size": 4096, 00:39:01.468 "num_blocks": 38912, 00:39:01.468 "uuid": "20d377e1-151a-4723-a24d-15dcaf3474a4", 00:39:01.468 "assigned_rate_limits": { 00:39:01.468 "rw_ios_per_sec": 0, 00:39:01.468 "rw_mbytes_per_sec": 0, 00:39:01.468 "r_mbytes_per_sec": 0, 00:39:01.468 "w_mbytes_per_sec": 0 00:39:01.468 }, 00:39:01.468 "claimed": false, 00:39:01.468 "zoned": false, 00:39:01.468 "supported_io_types": { 00:39:01.468 "read": true, 00:39:01.468 "write": true, 00:39:01.468 "unmap": true, 00:39:01.468 "flush": false, 00:39:01.468 "reset": true, 00:39:01.468 "nvme_admin": false, 00:39:01.468 "nvme_io": false, 00:39:01.468 "nvme_io_md": false, 00:39:01.468 "write_zeroes": true, 00:39:01.468 "zcopy": false, 00:39:01.468 "get_zone_info": false, 00:39:01.468 "zone_management": false, 00:39:01.468 "zone_append": false, 00:39:01.468 "compare": false, 00:39:01.468 "compare_and_write": false, 00:39:01.468 "abort": false, 00:39:01.468 "seek_hole": true, 00:39:01.468 "seek_data": true, 00:39:01.468 "copy": false, 00:39:01.468 "nvme_iov_md": false 00:39:01.468 }, 00:39:01.468 "driver_specific": { 00:39:01.468 "lvol": { 00:39:01.468 "lvol_store_uuid": "c0b8e3bb-3b20-4558-866e-c6a755a48b7d", 00:39:01.468 "base_bdev": "aio_bdev", 00:39:01.468 "thin_provision": false, 00:39:01.468 "num_allocated_clusters": 38, 00:39:01.468 "snapshot": false, 00:39:01.468 "clone": false, 00:39:01.468 "esnap_clone": false 00:39:01.468 } 00:39:01.468 } 00:39:01.468 } 00:39:01.468 ] 00:39:01.468 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:01.468 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:01.468 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:01.727 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:01.727 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:01.727 16:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:01.986 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:01.986 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20d377e1-151a-4723-a24d-15dcaf3474a4 00:39:02.245 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d 00:39:02.245 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:02.504 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.504 00:39:02.504 real 0m16.976s 00:39:02.504 user 0m34.381s 00:39:02.504 sys 0m3.821s 00:39:02.504 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:02.504 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:02.504 ************************************ 00:39:02.504 END TEST lvs_grow_dirty 00:39:02.504 ************************************ 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:02.763 nvmf_trace.0 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
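The teardown traced above removes objects in dependency order, since each layer sits on the one below it. Stated as a sketch, with the same hypothetical $rpc/$testdir shorthand as above:

  # Delete top-down: lvol -> lvstore -> aio bdev -> backing file
  $rpc bdev_lvol_delete 20d377e1-151a-4723-a24d-15dcaf3474a4
  $rpc bdev_lvol_delete_lvstore -u c0b8e3bb-3b20-4558-866e-c6a755a48b7d
  $rpc bdev_aio_delete aio_bdev
  rm -f "$testdir/aio_bdev"

nvmftestfini then unloads the kernel initiator modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines below) and tears down the namespace networking.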
00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:02.763 rmmod nvme_tcp 00:39:02.763 rmmod nvme_fabrics 00:39:02.763 rmmod nvme_keyring 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1246193 ']' 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1246193 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1246193 ']' 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1246193 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:02.763 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:02.764 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246193 00:39:02.764 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:02.764 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:02.764 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246193' 00:39:02.764 killing process with pid 1246193 00:39:02.764 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1246193 00:39:02.764 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1246193 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.023 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.928 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:04.928 00:39:04.928 real 0m41.820s 00:39:04.928 user 0m52.039s 00:39:04.928 sys 0m10.196s 00:39:04.928 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:04.928 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:04.928 ************************************ 00:39:04.928 END TEST nvmf_lvs_grow 00:39:04.928 ************************************ 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:05.238 ************************************ 00:39:05.238 START TEST nvmf_bdev_io_wait 00:39:05.238 ************************************ 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:05.238 * Looking for test storage... 
00:39:05.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:05.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.238 --rc genhtml_branch_coverage=1 00:39:05.238 --rc genhtml_function_coverage=1 00:39:05.238 --rc genhtml_legend=1 00:39:05.238 --rc geninfo_all_blocks=1 00:39:05.238 --rc geninfo_unexecuted_blocks=1 00:39:05.238 00:39:05.238 ' 00:39:05.238 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:05.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.238 --rc genhtml_branch_coverage=1 00:39:05.238 --rc genhtml_function_coverage=1 00:39:05.238 --rc genhtml_legend=1 00:39:05.238 --rc geninfo_all_blocks=1 00:39:05.238 --rc geninfo_unexecuted_blocks=1 00:39:05.239 00:39:05.239 ' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.239 --rc genhtml_branch_coverage=1 00:39:05.239 --rc genhtml_function_coverage=1 00:39:05.239 --rc genhtml_legend=1 00:39:05.239 --rc geninfo_all_blocks=1 00:39:05.239 --rc geninfo_unexecuted_blocks=1 00:39:05.239 00:39:05.239 ' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.239 --rc genhtml_branch_coverage=1 00:39:05.239 --rc genhtml_function_coverage=1 00:39:05.239 --rc genhtml_legend=1 00:39:05.239 --rc geninfo_all_blocks=1 00:39:05.239 --rc 
geninfo_unexecuted_blocks=1 00:39:05.239 00:39:05.239 ' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.239 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.579 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:05.579 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:05.579 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:05.579 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
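The loop that follows walks each matching PCI function and resolves it to its kernel network interface through sysfs; both 0000:af:00.0 and 0000:af:00.1 report 0x8086:0x159b, an Intel E810-class part bound to the ice driver, which is why they land in the e810 bucket above. A minimal sketch of the same lookup, with the device addresses taken from the log and the variable names purely illustrative:

  # Map each discovered NIC PCI function to its net device via sysfs
  for pci in 0000:af:00.0 0000:af:00.1; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do   # as in nvmf/common.sh@411
          echo "Found net devices under $pci: ${path##*/}"
      done
  done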
00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:10.911 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:10.911 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:10.911 Found net devices under 0000:af:00.0: cvl_0_0 00:39:10.911 
16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:10.911 Found net devices under 0000:af:00.1: cvl_0_1 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.911 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:39:11.171 00:39:11.171 --- 10.0.0.2 ping statistics --- 00:39:11.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.171 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:11.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:39:11.171 00:39:11.171 --- 10.0.0.1 ping statistics --- 00:39:11.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.171 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1250297 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1250297 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1250297 ']' 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
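What the preceding lines set up: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms the path before nvmf_tgt is launched inside the namespace with -m 0xF --wait-for-rpc. A condensed sketch of that bring-up, every command taken from the log (the iptables comment tag is omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1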
00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.171 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.430 [2024-12-16 16:44:59.800841] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:11.430 [2024-12-16 16:44:59.801836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:11.430 [2024-12-16 16:44:59.801872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.430 [2024-12-16 16:44:59.880687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:11.430 [2024-12-16 16:44:59.904886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.430 [2024-12-16 16:44:59.904923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.430 [2024-12-16 16:44:59.904929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.430 [2024-12-16 16:44:59.904935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.430 [2024-12-16 16:44:59.904940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.430 [2024-12-16 16:44:59.906413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.430 [2024-12-16 16:44:59.906526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.430 [2024-12-16 16:44:59.906634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.430 [2024-12-16 16:44:59.906634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:11.430 [2024-12-16 16:44:59.906897] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:11.430 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:11.430 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.430 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.430 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.430 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:11.430 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.430 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.690 [2024-12-16 16:45:00.070248] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:11.690 [2024-12-16 16:45:00.070999] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:11.690 [2024-12-16 16:45:00.071149] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:11.690 [2024-12-16 16:45:00.071264] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
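Because nvmf_tgt was started with --wait-for-rpc, framework initialization is deferred until an RPC requests it; that is what lets bdev_set_options -p 5 -c 1 (I/O pool and cache sizes) land before framework_start_init creates the poll groups noted above. rpc_cmd is the harness wrapper around SPDK's rpc.py, so the equivalent direct calls, assuming the stock script and default RPC socket, would be:

  scripts/rpc.py bdev_set_options -p 5 -c 1   # must run before framework init
  scripts/rpc.py framework_start_init         # now bring the framework up

The TCP transport, the Malloc0 bdev, the cnode1 subsystem, its namespace, and the 10.0.0.2:4420 listener that follow are all created over the same RPC channel.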
00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.690 [2024-12-16 16:45:00.083144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.690 Malloc0 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:11.690 [2024-12-16 16:45:00.155563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1250417 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1250419 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:11.690 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.691 { 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme$subsystem", 00:39:11.691 "trtype": "$TEST_TRANSPORT", 00:39:11.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "$NVMF_PORT", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.691 "hdgst": ${hdgst:-false}, 00:39:11.691 "ddgst": ${ddgst:-false} 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 } 00:39:11.691 EOF 00:39:11.691 )") 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1250421 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.691 { 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme$subsystem", 00:39:11.691 "trtype": "$TEST_TRANSPORT", 00:39:11.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "$NVMF_PORT", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.691 "hdgst": ${hdgst:-false}, 00:39:11.691 "ddgst": ${ddgst:-false} 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 } 00:39:11.691 EOF 00:39:11.691 )") 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1250424 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.691 { 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme$subsystem", 00:39:11.691 "trtype": "$TEST_TRANSPORT", 00:39:11.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "$NVMF_PORT", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.691 "hdgst": ${hdgst:-false}, 00:39:11.691 "ddgst": ${ddgst:-false} 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 } 00:39:11.691 EOF 00:39:11.691 )") 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.691 { 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme$subsystem", 00:39:11.691 "trtype": "$TEST_TRANSPORT", 00:39:11.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "$NVMF_PORT", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.691 "hdgst": ${hdgst:-false}, 00:39:11.691 "ddgst": ${ddgst:-false} 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 } 00:39:11.691 EOF 00:39:11.691 )") 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1250417 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
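gen_nvmf_target_json renders the heredoc template above, substituting the $subsystem placeholders (Nvme1, tcp, 10.0.0.2, 4420), pipes the result through jq, and hands it to bdevperf as a file descriptor, which is why each command line shows --json /dev/fd/63. A hedged reconstruction of that plumbing for the write instance, using process substitution:

  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256

The fully resolved JSON that each of the four instances receives is printed next.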
00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme1", 00:39:11.691 "trtype": "tcp", 00:39:11.691 "traddr": "10.0.0.2", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "4420", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.691 "hdgst": false, 00:39:11.691 "ddgst": false 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 }' 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme1", 00:39:11.691 "trtype": "tcp", 00:39:11.691 "traddr": "10.0.0.2", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "4420", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.691 "hdgst": false, 00:39:11.691 "ddgst": false 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 }' 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme1", 00:39:11.691 "trtype": "tcp", 00:39:11.691 "traddr": "10.0.0.2", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "4420", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.691 "hdgst": false, 00:39:11.691 "ddgst": false 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 }' 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:11.691 16:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.691 "params": { 00:39:11.691 "name": "Nvme1", 00:39:11.691 "trtype": "tcp", 00:39:11.691 "traddr": "10.0.0.2", 00:39:11.691 "adrfam": "ipv4", 00:39:11.691 "trsvcid": "4420", 00:39:11.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.691 "hdgst": false, 00:39:11.691 "ddgst": false 00:39:11.691 }, 00:39:11.691 "method": "bdev_nvme_attach_controller" 00:39:11.691 }' 00:39:11.691 [2024-12-16 16:45:00.206508] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:11.691 [2024-12-16 16:45:00.206506] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:11.691 [2024-12-16 16:45:00.206560] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:11.691 [2024-12-16 16:45:00.206561] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:11.691 [2024-12-16 16:45:00.207231] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:11.691 [2024-12-16 16:45:00.207266] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:11.691 [2024-12-16 16:45:00.212198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:11.691 [2024-12-16 16:45:00.212246] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:11.950 [2024-12-16 16:45:00.397293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.950 [2024-12-16 16:45:00.414667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:11.950 [2024-12-16 16:45:00.488143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.950 [2024-12-16 16:45:00.505222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:12.209 [2024-12-16 16:45:00.589385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.209 [2024-12-16 16:45:00.606949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:12.209 [2024-12-16 16:45:00.651949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.209 [2024-12-16 16:45:00.667995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:12.209 Running I/O for 1 seconds... 00:39:12.209 Running I/O for 1 seconds... 00:39:12.467 Running I/O for 1 seconds... 00:39:12.467 Running I/O for 1 seconds...
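The four bdevperf processes coexist on one host because each gets a disjoint core mask and its own DPDK instance: the -i N flag becomes the --file-prefix=spdkN seen in the EAL arguments, keeping hugepage and shared-memory files separate, while -s 256 shows up as the EAL -m 256 memory cap. Mapping the masks to the reactor cores reported above:

  # -m 0x10 -i 1 -> core 4 (write)    -m 0x20 -i 2 -> core 5 (read)
  # -m 0x40 -i 3 -> core 6 (flush)    -m 0x80 -i 4 -> core 7 (unmap)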
00:39:13.404 8695.00 IOPS, 33.96 MiB/s 00:39:13.404 Latency(us) 00:39:13.404 [2024-12-16T15:45:02.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.404 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:13.404 Nvme1n1 : 1.01 8700.78 33.99 0.00 0.00 14562.73 1474.56 23218.47 00:39:13.404 [2024-12-16T15:45:02.013Z] =================================================================================================================== 00:39:13.404 [2024-12-16T15:45:02.013Z] Total : 8700.78 33.99 0.00 0.00 14562.73 1474.56 23218.47 00:39:13.404 11892.00 IOPS, 46.45 MiB/s 00:39:13.404 Latency(us) 00:39:13.404 [2024-12-16T15:45:02.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.404 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:13.404 Nvme1n1 : 1.01 11935.54 46.62 0.00 0.00 10683.41 4369.07 14730.00 00:39:13.404 [2024-12-16T15:45:02.013Z] =================================================================================================================== 00:39:13.404 [2024-12-16T15:45:02.013Z] Total : 11935.54 46.62 0.00 0.00 10683.41 4369.07 14730.00 00:39:13.404 16:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1250419 00:39:13.404 16:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1250421 00:39:13.404 9481.00 IOPS, 37.04 MiB/s 00:39:13.404 Latency(us) 00:39:13.404 [2024-12-16T15:45:02.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.404 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:13.404 Nvme1n1 : 1.01 9619.00 37.57 0.00 0.00 13282.64 2543.42 29709.65 00:39:13.404 [2024-12-16T15:45:02.013Z] =================================================================================================================== 00:39:13.404 [2024-12-16T15:45:02.013Z] Total : 9619.00 37.57 0.00 0.00 13282.64 2543.42 29709.65 00:39:13.404 242984.00 IOPS, 949.16 MiB/s 00:39:13.404 Latency(us) 00:39:13.404 [2024-12-16T15:45:02.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.404 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:13.404 Nvme1n1 : 1.00 242620.06 947.73 0.00 0.00 524.39 219.43 1490.16 00:39:13.404 [2024-12-16T15:45:02.013Z] =================================================================================================================== 00:39:13.404 [2024-12-16T15:45:02.013Z] Total : 242620.06 947.73 0.00 0.00 524.39 219.43 1490.16 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1250424 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
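A quick Little's-law check ties the three columns of each table together: at queue depth 128, IOPS should be roughly 128 divided by the mean latency, and the reported figures agree to within about one percent (the gap is ramp-up and teardown inside the 1-second window):

  write: 128 / 14562.73e-6 s ~= 8790    (reported 8700.78)
  read : 128 / 10683.41e-6 s ~= 11981   (reported 11935.54)
  unmap: 128 / 13282.64e-6 s ~= 9637    (reported 9619.00)
  flush: 128 /   524.39e-6 s ~= 244094  (reported 242620.06)

The flush outlier is plausible rather than suspicious: against the RAM-backed Malloc0 bdev a flush has essentially no device work to do, so it completes almost immediately.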
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:13.663 rmmod nvme_tcp 00:39:13.663 rmmod nvme_fabrics 00:39:13.663 rmmod nvme_keyring 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1250297 ']' 00:39:13.663 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1250297 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1250297 ']' 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1250297 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1250297 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250297' 00:39:13.664 killing process with pid 1250297 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1250297 00:39:13.664 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1250297 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:13.922 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.923 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.923 16:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:15.828 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:15.828 00:39:15.828 real 0m10.843s 00:39:15.828 user 0m15.041s 00:39:15.828 sys 0m6.399s 00:39:15.828 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:15.828 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:15.828 ************************************ 00:39:15.828 END TEST nvmf_bdev_io_wait 00:39:15.828 ************************************ 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:16.088 ************************************ 00:39:16.088 START TEST nvmf_queue_depth 00:39:16.088 ************************************ 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:16.088 * Looking for test storage... 
00:39:16.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:16.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.088 --rc genhtml_branch_coverage=1 00:39:16.088 --rc genhtml_function_coverage=1 00:39:16.088 --rc genhtml_legend=1 00:39:16.088 --rc geninfo_all_blocks=1 00:39:16.088 --rc geninfo_unexecuted_blocks=1 00:39:16.088 00:39:16.088 ' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:16.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.088 --rc genhtml_branch_coverage=1 00:39:16.088 --rc genhtml_function_coverage=1 00:39:16.088 --rc genhtml_legend=1 00:39:16.088 --rc geninfo_all_blocks=1 00:39:16.088 --rc geninfo_unexecuted_blocks=1 00:39:16.088 00:39:16.088 ' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:16.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.088 --rc genhtml_branch_coverage=1 00:39:16.088 --rc genhtml_function_coverage=1 00:39:16.088 --rc genhtml_legend=1 00:39:16.088 --rc geninfo_all_blocks=1 00:39:16.088 --rc geninfo_unexecuted_blocks=1 00:39:16.088 00:39:16.088 ' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:16.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.088 --rc genhtml_branch_coverage=1 00:39:16.088 --rc genhtml_function_coverage=1 00:39:16.088 --rc genhtml_legend=1 00:39:16.088 --rc geninfo_all_blocks=1 00:39:16.088 --rc 
geninfo_unexecuted_blocks=1 00:39:16.088 00:39:16.088 ' 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.088 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.348 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:16.349 16:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:22.923 16:45:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.923 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:22.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:22.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:22.924 Found net devices under 0000:af:00.0: cvl_0_0 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:22.924 Found net devices under 0000:af:00.1: cvl_0_1 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:22.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:22.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:39:22.924 00:39:22.924 --- 10.0.0.2 ping statistics --- 00:39:22.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.924 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:22.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:22.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:39:22.924 00:39:22.924 --- 10.0.0.1 ping statistics --- 00:39:22.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.924 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1254634 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1254634 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254634 ']' 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
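Stripped of the xtrace noise, the network plumbing performed just above reduces to a handful of iproute2/iptables commands: one physical port (cvl_0_0) is moved into a dedicated namespace to act as the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator side, and a firewall rule admits NVMe/TCP traffic on port 4420. A condensed replay of those commands as they appear in the trace (the SPDK_NVMF comment tag on the iptables rule is dropped for brevity):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                     # root ns -> namespace reachability check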
00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:22.924 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.924 [2024-12-16 16:45:10.686436] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:22.924 [2024-12-16 16:45:10.687377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:22.924 [2024-12-16 16:45:10.687409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.924 [2024-12-16 16:45:10.766345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.924 [2024-12-16 16:45:10.787257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:22.925 [2024-12-16 16:45:10.787290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:22.925 [2024-12-16 16:45:10.787297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:22.925 [2024-12-16 16:45:10.787303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:22.925 [2024-12-16 16:45:10.787308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:22.925 [2024-12-16 16:45:10.787783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.925 [2024-12-16 16:45:10.849067] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:22.925 [2024-12-16 16:45:10.849273] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
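With the target process up in interrupt mode, the rpc_cmd calls that follow in the trace provision it over JSON-RPC; rpc_cmd in these autotest scripts is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Restated as direct rpc.py invocations with the sizes, NQN and address used in this run (a sketch, assuming the default RPC socket and the SPDK repo root as working directory):

# target launch, already running at this point in the log
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420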
00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 [2024-12-16 16:45:10.916492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 Malloc0 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 [2024-12-16 16:45:10.988538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1254653 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1254653 /var/tmp/bdevperf.sock 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254653 ']' 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:22.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:22.925 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 [2024-12-16 16:45:11.036746] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:22.925 [2024-12-16 16:45:11.036787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254653 ] 00:39:22.925 [2024-12-16 16:45:11.110293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.925 [2024-12-16 16:45:11.132699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.925 NVMe0n1 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.925 16:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:23.184 Running I/O for 10 seconds... 00:39:25.056 11836.00 IOPS, 46.23 MiB/s [2024-12-16T15:45:14.601Z] 12177.50 IOPS, 47.57 MiB/s [2024-12-16T15:45:15.977Z] 12124.33 IOPS, 47.36 MiB/s [2024-12-16T15:45:16.913Z] 12112.50 IOPS, 47.31 MiB/s [2024-12-16T15:45:17.851Z] 12281.40 IOPS, 47.97 MiB/s [2024-12-16T15:45:18.787Z] 12304.50 IOPS, 48.06 MiB/s [2024-12-16T15:45:19.723Z] 12382.29 IOPS, 48.37 MiB/s [2024-12-16T15:45:20.660Z] 12418.88 IOPS, 48.51 MiB/s [2024-12-16T15:45:22.037Z] 12412.00 IOPS, 48.48 MiB/s [2024-12-16T15:45:22.037Z] 12432.80 IOPS, 48.57 MiB/s 00:39:33.428 Latency(us) 00:39:33.428 [2024-12-16T15:45:22.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:33.428 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:33.428 Verification LBA range: start 0x0 length 0x4000 00:39:33.428 NVMe0n1 : 10.06 12463.42 48.69 0.00 0.00 81864.02 14792.41 51929.48 00:39:33.428 [2024-12-16T15:45:22.037Z] =================================================================================================================== 00:39:33.428 [2024-12-16T15:45:22.037Z] Total : 12463.42 48.69 0.00 0.00 81864.02 14792.41 51929.48 00:39:33.428 { 00:39:33.428 "results": [ 00:39:33.428 { 00:39:33.428 "job": "NVMe0n1", 00:39:33.428 "core_mask": "0x1", 00:39:33.428 "workload": "verify", 00:39:33.428 "status": "finished", 00:39:33.428 "verify_range": { 00:39:33.428 "start": 0, 00:39:33.428 "length": 16384 00:39:33.428 }, 00:39:33.428 "queue_depth": 1024, 00:39:33.428 "io_size": 4096, 00:39:33.428 "runtime": 10.056789, 00:39:33.428 "iops": 12463.421475781186, 00:39:33.428 "mibps": 48.68524013977026, 00:39:33.428 "io_failed": 0, 00:39:33.428 "io_timeout": 0, 00:39:33.428 "avg_latency_us": 81864.01989236307, 00:39:33.428 "min_latency_us": 14792.411428571428, 00:39:33.428 "max_latency_us": 51929.4780952381 00:39:33.428 } 
00:39:33.428 ], 00:39:33.428 "core_count": 1 00:39:33.428 } 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1254653 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254653 ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254653 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254653 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254653' 00:39:33.428 killing process with pid 1254653 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254653 00:39:33.428 Received shutdown signal, test time was about 10.000000 seconds 00:39:33.428 00:39:33.428 Latency(us) 00:39:33.428 [2024-12-16T15:45:22.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:33.428 [2024-12-16T15:45:22.037Z] =================================================================================================================== 00:39:33.428 [2024-12-16T15:45:22.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254653 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.428 rmmod nvme_tcp 00:39:33.428 rmmod nvme_fabrics 00:39:33.428 rmmod nvme_keyring 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
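Two quick consistency checks on the 10-second run above. The reported bandwidth is just IOPS times I/O size: 12463.42 IOPS x 4096 B is about 48.69 MiB/s, which matches the mibps field in the JSON. And by Little's law the average number of in-flight I/Os is IOPS times mean latency: 12463.42/s x 81.864 ms comes to roughly 1020, so the run really did sustain close to its configured queue depth of 1024. As shell one-liners (awk used only for the floating-point math):

awk 'BEGIN { printf "%.2f MiB/s\n", 12463.42 * 4096 / (1024 * 1024) }'   # -> 48.69 MiB/s
awk 'BEGIN { printf "%.0f in flight\n", 12463.42 * 81864.02 / 1e6 }'     # -> ~1020, vs -q 1024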
00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1254634 ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1254634 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254634 ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254634 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.428 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254634 00:39:33.428 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:33.428 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:33.429 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254634' 00:39:33.429 killing process with pid 1254634 00:39:33.429 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254634 00:39:33.429 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254634 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.688 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.225 00:39:36.225 real 0m19.759s 00:39:36.225 user 0m22.795s 00:39:36.225 sys 0m6.300s 00:39:36.225 16:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:36.225 ************************************ 00:39:36.225 END TEST nvmf_queue_depth 00:39:36.225 ************************************ 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:36.225 ************************************ 00:39:36.225 START TEST nvmf_target_multipath 00:39:36.225 ************************************ 00:39:36.225 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:36.225 * Looking for test storage... 00:39:36.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
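run_test, which launches nvmf_target_multipath above, is a thin wrapper: it rejects a bare test name (the '[' 4 -le 1 ']' guard), prints the START banner, times the test script, and prints END plus the real/user/sys summary seen a few entries back. A minimal sketch under those assumptions (the real wrapper also records per-test results for the final report):

    # Minimal sketch of the run_test wrapper whose banners appear above.
    run_test() {
        [ "$#" -le 1 ] && return 1        # needs a name plus a command
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                          # e.g. multipath.sh --transport=tcp --interrupt-mode
        echo "************ END TEST $name ************"
    }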
scripts/common.sh@344 -- # case "$op" in 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:36.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.226 --rc genhtml_branch_coverage=1 00:39:36.226 --rc genhtml_function_coverage=1 00:39:36.226 --rc genhtml_legend=1 00:39:36.226 --rc geninfo_all_blocks=1 00:39:36.226 --rc geninfo_unexecuted_blocks=1 00:39:36.226 00:39:36.226 ' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:36.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.226 --rc genhtml_branch_coverage=1 00:39:36.226 --rc genhtml_function_coverage=1 00:39:36.226 --rc genhtml_legend=1 00:39:36.226 --rc geninfo_all_blocks=1 00:39:36.226 --rc geninfo_unexecuted_blocks=1 00:39:36.226 00:39:36.226 ' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:36.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.226 --rc genhtml_branch_coverage=1 00:39:36.226 --rc genhtml_function_coverage=1 00:39:36.226 --rc genhtml_legend=1 
00:39:36.226 --rc geninfo_all_blocks=1 00:39:36.226 --rc geninfo_unexecuted_blocks=1 00:39:36.226 00:39:36.226 ' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:36.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.226 --rc genhtml_branch_coverage=1 00:39:36.226 --rc genhtml_function_coverage=1 00:39:36.226 --rc genhtml_legend=1 00:39:36.226 --rc geninfo_all_blocks=1 00:39:36.226 --rc geninfo_unexecuted_blocks=1 00:39:36.226 00:39:36.226 ' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
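The long cmp_versions walk traced above is scripts/common.sh deciding whether the installed lcov predates 2.x (and therefore needs the extra --rc branch/function coverage flags that follow): each version string is split on '.', '-' and ':' and compared numerically field by field, padding the shorter one. Condensed to the '<' case exercised here, with zero-padding as a simplification of the script's fill handling:

    # Condensed from the cmp_versions trace above (lt 1.15 2 -> true).
    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2: old lcov
        done
        return 1                                              # equal: not less-than
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # usage: lt 1.15 2 && use_legacy_lcov_opts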
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
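Worth noting in the paths/export.sh entries above: every test re-sources the file, and each pass prepends the same go/protoc/golangci directories again, which is why the exported PATH repeats those segments many times. The script does not guard against duplicates; a conventional guard, shown purely as an illustration and not as what export.sh does, looks like:

    # Hypothetical duplicate-safe prepend -- NOT what paths/export.sh does;
    # included only to explain the repeated PATH segments above.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                  # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH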
NVMF_APP_SHM_ID 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.226 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:36.227 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
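build_nvmf_app_args, traced above, assembles the nvmf_tgt command line; because this job runs with --interrupt-mode, the '[' 1 -eq 1 ']' branch appends that flag. A sketch of the assembly (the -i/-e/--interrupt-mode appends are taken from the trace; the guard variable names are assumptions, since the trace only shows the evaluated values):

    # Sketch of build_nvmf_app_args as traced above.
    build_nvmf_app_args() {
        if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then          # '0 -eq 1' here: skipped
            NVMF_APP=(sudo -E -u "$USER" "${NVMF_APP[@]}")    # hypothetical non-root branch
        fi
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)           # shm id + log-level mask
        NVMF_APP+=("${NO_HUGE[@]}")                           # empty unless hugepages off
        if [ "${NVMF_INTERRUPT_MODE:-0}" -eq 1 ]; then        # '1 -eq 1' in this run
            NVMF_APP+=(--interrupt-mode)
        fi
    }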
common/autotest_common.sh@10 -- # set +x 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:42.798 16:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:42.798 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:42.798 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:42.798 16:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:42.798 Found net devices under 0000:af:00.0: cvl_0_0 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:42.798 Found net devices under 0000:af:00.1: cvl_0_1 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:42.798 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
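The discovery pass above first builds allowlists of NIC device IDs (e810 includes 0x1592/0x159b, x722 is 0x37d2, plus the Mellanox table) and matches both ports of this host's E810 (0x8086 - 0x159b, ice driver); it then resolves each PCI address to interface names through sysfs, yielding cvl_0_0 and cvl_0_1 and setting is_hw=yes. The sysfs walk reduces to:

    # Essence of the sysfs resolution traced above: a PCI function's network
    # interfaces live under /sys/bus/pci/devices/<addr>/net/.
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path, keep ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done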
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:42.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:42.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:39:42.799 00:39:42.799 --- 10.0.0.2 ping statistics --- 00:39:42.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.799 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:42.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:42.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:39:42.799 00:39:42.799 --- 10.0.0.1 ping statistics --- 00:39:42.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.799 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:42.799 only one NIC for nvmf test 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:42.799 rmmod nvme_tcp 00:39:42.799 rmmod nvme_fabrics 00:39:42.799 rmmod nvme_keyring 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:42.799 16:45:30 
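What the nvmf_tcp_init plumbing above builds: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, so a single host can drive real NVMe/TCP traffic over the wire between its own two ports; the two sub-0.3 ms pings verify the path before any target is started. Condensed from the trace:

    # Condensed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, tagged for the SPDK_NVMF cleanup filter
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator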
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.799 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.177 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:44.178 16:45:32 
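Note the double teardown around the exit above: multipath.sh calls nvmftestfini itself, exits 0 because only one NIC is cabled for the test, and then the EXIT trap installed by nvmftestinit fires nvmftestfini a second time (the multipath.sh@1 entries). The cleanup is written to be idempotent, so the rerun is harmless: the '[' -n '' ']' guard skips killprocess because no target pid was ever recorded. The pattern, sketched:

    # The trap pattern behind the repeated teardown above: cleanup runs even
    # on an early 'exit 0', and empty-pid guards make a second pass a no-op.
    trap nvmftestfini SIGINT SIGTERM EXIT
    nvmftestfini() {
        [ -n "${nvmfpid:-}" ] && killprocess "$nvmfpid"   # skipped here: nvmfpid unset
        nvmfpid=
        # ... module unload, iptables restore, namespace removal as traced above
    }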
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:44.178 00:39:44.178 real 0m8.270s 00:39:44.178 user 0m1.848s 00:39:44.178 sys 0m4.410s 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:44.178 ************************************ 00:39:44.178 END TEST nvmf_target_multipath 00:39:44.178 ************************************ 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:44.178 ************************************ 00:39:44.178 START TEST nvmf_zcopy 00:39:44.178 ************************************ 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:44.178 * Looking for test storage... 
00:39:44.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:44.178 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:44.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.438 --rc genhtml_branch_coverage=1 00:39:44.438 --rc genhtml_function_coverage=1 00:39:44.438 --rc genhtml_legend=1 00:39:44.438 --rc geninfo_all_blocks=1 00:39:44.438 --rc geninfo_unexecuted_blocks=1 00:39:44.438 00:39:44.438 ' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:44.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.438 --rc genhtml_branch_coverage=1 00:39:44.438 --rc genhtml_function_coverage=1 00:39:44.438 --rc genhtml_legend=1 00:39:44.438 --rc geninfo_all_blocks=1 00:39:44.438 --rc geninfo_unexecuted_blocks=1 00:39:44.438 00:39:44.438 ' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:44.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.438 --rc genhtml_branch_coverage=1 00:39:44.438 --rc genhtml_function_coverage=1 00:39:44.438 --rc genhtml_legend=1 00:39:44.438 --rc geninfo_all_blocks=1 00:39:44.438 --rc geninfo_unexecuted_blocks=1 00:39:44.438 00:39:44.438 ' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:44.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.438 --rc genhtml_branch_coverage=1 00:39:44.438 --rc genhtml_function_coverage=1 00:39:44.438 --rc genhtml_legend=1 00:39:44.438 --rc geninfo_all_blocks=1 00:39:44.438 --rc geninfo_unexecuted_blocks=1 00:39:44.438 00:39:44.438 ' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
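Each test re-sources nvmf/common.sh, so the host identity above is regenerated the same way every time: nvme gen-hostnqn emits a UUID-based NQN, the UUID becomes the host ID, and both are kept as standing arguments for every later nvme connect. In shell terms (the suffix-stripping expansion is an assumption about how common.sh derives the ID; the UUID value matches the trace):

    # How the NVME_HOST arguments above are derived.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subnqn>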
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:44.438 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:44.439 16:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:44.439 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:51.007 16:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:51.007 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:51.008 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:51.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:51.008 Found net devices under 0000:af:00.0: cvl_0_0 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:51.008 Found net devices under 0000:af:00.1: cvl_0_1 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:51.008 16:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:51.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:51.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:39:51.008 00:39:51.008 --- 10.0.0.2 ping statistics --- 00:39:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.008 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:51.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:51.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:39:51.008 00:39:51.008 --- 10.0.0.1 ping statistics --- 00:39:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.008 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1263212 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1263212 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1263212 ']' 00:39:51.008 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.009 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.009 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.009 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.009 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 [2024-12-16 16:45:38.872319] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:51.009 [2024-12-16 16:45:38.873230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:51.009 [2024-12-16 16:45:38.873264] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:51.009 [2024-12-16 16:45:38.950687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.009 [2024-12-16 16:45:38.971532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:51.009 [2024-12-16 16:45:38.971566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:51.009 [2024-12-16 16:45:38.971574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:51.009 [2024-12-16 16:45:38.971580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:51.009 [2024-12-16 16:45:38.971585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:51.009 [2024-12-16 16:45:38.972067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.009 [2024-12-16 16:45:39.034080] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:51.009 [2024-12-16 16:45:39.034302] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
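The nvmftestinit trace above reduces to a short standalone recipe: the two ports of one NIC are wired back-to-back by moving the target-side port into a private network namespace, so target and initiator traffic crosses real hardware on a single host. A minimal sketch of the same steps, assuming the ports enumerate as cvl_0_0 and cvl_0_1 (interface names are host-specific) and a root shell; the repo-relative nvmf_tgt path stands in for the absolute Jenkins workspace path used in the trace:

    # Clear stale addresses, then isolate the target port in its own namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2 inside.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (port 4420) on the initiator-facing interface, then verify
    # reachability in both directions. The harness additionally tags this rule
    # with an SPDK_NVMF comment via its ipts wrapper.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch the target inside the namespace, as the trace does (interrupt mode, core mask 0x2).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2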
00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 [2024-12-16 16:45:39.104736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 [2024-12-16 16:45:39.132944] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:51.009 16:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 malloc0 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:51.009 { 00:39:51.009 "params": { 00:39:51.009 "name": "Nvme$subsystem", 00:39:51.009 "trtype": "$TEST_TRANSPORT", 00:39:51.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.009 "adrfam": "ipv4", 00:39:51.009 "trsvcid": "$NVMF_PORT", 00:39:51.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.009 "hdgst": ${hdgst:-false}, 00:39:51.009 "ddgst": ${ddgst:-false} 00:39:51.009 }, 00:39:51.009 "method": "bdev_nvme_attach_controller" 00:39:51.009 } 00:39:51.009 EOF 00:39:51.009 )") 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:51.009 16:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:51.009 "params": { 00:39:51.009 "name": "Nvme1", 00:39:51.009 "trtype": "tcp", 00:39:51.009 "traddr": "10.0.0.2", 00:39:51.009 "adrfam": "ipv4", 00:39:51.009 "trsvcid": "4420", 00:39:51.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:51.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:51.009 "hdgst": false, 00:39:51.009 "ddgst": false 00:39:51.009 }, 00:39:51.009 "method": "bdev_nvme_attach_controller" 00:39:51.009 }' 00:39:51.009 [2024-12-16 16:45:39.229020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:51.009 [2024-12-16 16:45:39.229074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263371 ]
00:39:51.009 [2024-12-16 16:45:39.303942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:51.009 [2024-12-16 16:45:39.326453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:39:51.009 Running I/O for 10 seconds...
00:39:52.883 8466.00 IOPS, 66.14 MiB/s
[2024-12-16T15:45:42.871Z] 8561.00 IOPS, 66.88 MiB/s
[2024-12-16T15:45:43.806Z] 8613.33 IOPS, 67.29 MiB/s
[2024-12-16T15:45:44.743Z] 8646.00 IOPS, 67.55 MiB/s
[2024-12-16T15:45:45.684Z] 8675.00 IOPS, 67.77 MiB/s
[2024-12-16T15:45:46.616Z] 8682.50 IOPS, 67.83 MiB/s
[2024-12-16T15:45:47.551Z] 8693.14 IOPS, 67.92 MiB/s
[2024-12-16T15:45:48.926Z] 8699.75 IOPS, 67.97 MiB/s
[2024-12-16T15:45:49.862Z] 8703.44 IOPS, 68.00 MiB/s
[2024-12-16T15:45:49.862Z] 8707.10 IOPS, 68.02 MiB/s
00:40:01.253 Latency(us)
00:40:01.253 [2024-12-16T15:45:49.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:01.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:40:01.253 Verification LBA range: start 0x0 length 0x1000
00:40:01.253 Nvme1n1 : 10.01 8709.05 68.04 0.00 0.00 14654.86 2090.91 21221.18
00:40:01.253 [2024-12-16T15:45:49.862Z] ===================================================================================================================
00:40:01.253 [2024-12-16T15:45:49.862Z] Total : 8709.05 68.04 0.00 0.00 14654.86 2090.91 21221.18
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1264930
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:40:01.253 {
00:40:01.253 "params": {
00:40:01.253 "name": "Nvme$subsystem",
00:40:01.253 "trtype": "$TEST_TRANSPORT",
00:40:01.253 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:01.253 "adrfam": "ipv4",
00:40:01.253 "trsvcid": "$NVMF_PORT",
00:40:01.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:01.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:01.253 "hdgst": ${hdgst:-false},
00:40:01.253 "ddgst": ${ddgst:-false}
00:40:01.253 },
00:40:01.253 "method": "bdev_nvme_attach_controller"
00:40:01.253 }
00:40:01.253 EOF
00:40:01.253 )")
00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:01.253
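In the autotest harness, rpc_cmd forwards to scripts/rpc.py against the target's RPC socket, so the subsystem configuration traced earlier can be replayed by hand. A sketch under stated assumptions (default socket /var/tmp/spdk.sock, addresses and NQNs exactly as logged; flags copied verbatim from the rpc_cmd calls in the trace):

    # TCP transport with the zero-copy path enabled, as in target/zcopy.sh.
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem with serial SPDK00000000000001, up to 10 namespaces, allow-any-host.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with a 4096-byte block size, exposed as namespace 1.
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf job that consumes the generated JSON (fed as --json /dev/fd/63 through process substitution) runs with -t 5 -q 128 -w randrw -M 50 -o 8192: a 5 second run at queue depth 128, a 50/50 random read/write mix, 8 KiB I/Os. The repeated 'Requested NSID 1 already in use' / 'Unable to add namespace' messages that follow appear to come from the test re-issuing nvmf_subsystem_add_ns against NSID 1 while the namespace is still attached, with the bdevperf I/O continuing underneath.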
[2024-12-16 16:45:49.672412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.253 [2024-12-16 16:45:49.672442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:01.253 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:01.253 "params": { 00:40:01.253 "name": "Nvme1", 00:40:01.253 "trtype": "tcp", 00:40:01.253 "traddr": "10.0.0.2", 00:40:01.253 "adrfam": "ipv4", 00:40:01.253 "trsvcid": "4420", 00:40:01.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:01.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:01.253 "hdgst": false, 00:40:01.253 "ddgst": false 00:40:01.253 }, 00:40:01.253 "method": "bdev_nvme_attach_controller" 00:40:01.253 }' 00:40:01.253 [2024-12-16 16:45:49.684380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.253 [2024-12-16 16:45:49.684393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.253 [2024-12-16 16:45:49.696375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.253 [2024-12-16 16:45:49.696385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.708374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.708385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.708882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:01.254 [2024-12-16 16:45:49.708923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264930 ] 00:40:01.254 [2024-12-16 16:45:49.720374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.720387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.732375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.732386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.744377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.744387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.756375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.756384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.768385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.768395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.780385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.780394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.781729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.254 [2024-12-16 16:45:49.792377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.792392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.804260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.254 [2024-12-16 16:45:49.804375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.804387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.816393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.816409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.828394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.828413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.840381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.840394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.254 [2024-12-16 16:45:49.852377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.254 [2024-12-16 16:45:49.852389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.864380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:40:01.513 [2024-12-16 16:45:49.864394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.876377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.876388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.888389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.888411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.900380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.900393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.912379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.912394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.924377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.924388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.936375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.936385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.948375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.948384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.960417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.960432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.972377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.972389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.984374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.984383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:49.996374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:49.996384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.008403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.008430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.020387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.020406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.032379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.032393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 
16:45:50.044376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.044387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.056384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.056401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.068376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.068386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.080376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.080386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.513 [2024-12-16 16:45:50.092464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.513 [2024-12-16 16:45:50.092501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.143385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.143404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.152378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.152391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 Running I/O for 5 seconds... 00:40:01.773 [2024-12-16 16:45:50.166251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.166270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.181032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.181052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.196237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.196256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.210405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.210435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.225571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.225591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.239977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.239997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.251977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.251996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.266118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:01.773 [2024-12-16 16:45:50.266137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.280804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.280824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.296650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.296668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.312392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.312411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.325390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.325409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.340090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.340116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.352913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.352931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:01.773 [2024-12-16 16:45:50.366631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:01.773 [2024-12-16 16:45:50.366651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.031 [2024-12-16 16:45:50.381341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.381363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.396017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.396039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.409502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.409521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.420644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.420664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.434229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.434249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.449380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.449398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.463980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.464001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.478275] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.478294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.492991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.493010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.507790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.507809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.522039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.522065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.536481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.536501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.549159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.549178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.561900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.561922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.576555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.576574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.589030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.589049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.604297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.604318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.616067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.616086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.032 [2024-12-16 16:45:50.629909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.032 [2024-12-16 16:45:50.629929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.644532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.644552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.656919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.656938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.671803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.671822] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.686110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.686129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.700685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.700703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.716044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.716064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.729806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.729826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.744051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.744070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.757738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.757758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.772497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.290 [2024-12-16 16:45:50.772516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.290 [2024-12-16 16:45:50.785996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.786015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.800413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.800432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.813102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.813121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.828812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.828831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.844408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.844428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.855349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.855368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.869691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:02.291 [2024-12-16 16:45:50.869709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:02.291 [2024-12-16 16:45:50.884831] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:02.291 [2024-12-16 16:45:50.884849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:02.552 [2024-12-16 16:45:50.900356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:02.552 [2024-12-16 16:45:50.900376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line *ERROR* pair repeats at roughly 10-20 ms intervals, timestamps 16:45:50.913 through 16:45:55.146; only the periodic I/O throughput status lines from that stretch are kept below]
00:40:02.861 16920.00 IOPS, 132.19 MiB/s [2024-12-16T15:45:51.470Z]
00:40:03.716 16907.00 IOPS, 132.09 MiB/s [2024-12-16T15:45:52.325Z]
00:40:04.753 16893.67 IOPS, 131.98 MiB/s [2024-12-16T15:45:53.362Z]
00:40:05.791 16883.25 IOPS, 131.90 MiB/s [2024-12-16T15:45:54.400Z]
00:40:06.569 [2024-12-16 16:45:55.160902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-12-16 16:45:55.160920]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.569 16903.00 IOPS, 132.05 MiB/s [2024-12-16T15:45:55.178Z] [2024-12-16 16:45:55.175240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.569 [2024-12-16 16:45:55.175259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827
00:40:06.827 Latency(us)
00:40:06.827 [2024-12-16T15:45:55.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:06.827 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:06.827 Nvme1n1 : 5.01 16904.87 132.07 0.00 0.00 7563.98 1981.68 12795.12
00:40:06.827 [2024-12-16T15:45:55.436Z] ===================================================================================================================
00:40:06.827 [2024-12-16T15:45:55.436Z] Total : 16904.87 132.07 0.00 0.00 7563.98 1981.68 12795.12
00:40:06.827 [2024-12-16 16:45:55.184381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.184397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.196378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.196392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.208389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.208410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.220380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.220397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.232383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.232401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.244377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.244391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.256379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.256393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.268389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.268403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.280388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.280401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.292385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16 16:45:55.292395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:06.827 [2024-12-16 16:45:55.304378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:06.827 [2024-12-16
16:45:55.304390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.827 [2024-12-16 16:45:55.316375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.827 [2024-12-16 16:45:55.316386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.827 [2024-12-16 16:45:55.328378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.827 [2024-12-16 16:45:55.328389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1264930) - No such process 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1264930 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:06.827 delay0 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.827 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:07.086 [2024-12-16 16:45:55.514238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:13.652 Initializing NVMe Controllers 00:40:13.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:13.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:13.652 Initialization complete. Launching workers. 
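
Note: the abort run's completion counts follow below. Gathered in one place, a sketch of the sequence traced above, assuming SPDK's scripts/rpc.py as the standalone equivalent of the suite's rpc_cmd wrapper, run from the repo root against the default /var/tmp/spdk.sock:

  # Replace the fast malloc-backed namespace with a deliberately slow delay bdev
  # so queued I/O survives long enough for abort requests to catch it.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, us
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive the slow namespace for 5 seconds with the bundled abort example:
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
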
00:40:13.652 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 186 00:40:13.652 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 457, failed to submit 49 00:40:13.652 success 318, unsuccessful 139, failed 0 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:13.652 rmmod nvme_tcp 00:40:13.652 rmmod nvme_fabrics 00:40:13.652 rmmod nvme_keyring 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1263212 ']' 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1263212 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1263212 ']' 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1263212 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263212 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263212' 00:40:13.652 killing process with pid 1263212 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1263212 00:40:13.652 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1263212 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:13.652 16:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.652 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.556 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:15.556 00:40:15.556 real 0m31.395s 00:40:15.557 user 0m40.794s 00:40:15.557 sys 0m12.072s 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:15.557 ************************************ 00:40:15.557 END TEST nvmf_zcopy 00:40:15.557 ************************************ 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:15.557 ************************************ 00:40:15.557 START TEST nvmf_nmic 00:40:15.557 ************************************ 00:40:15.557 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:15.816 * Looking for test storage... 
00:40:15.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:15.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.816 --rc genhtml_branch_coverage=1 00:40:15.816 --rc genhtml_function_coverage=1 00:40:15.816 --rc genhtml_legend=1 00:40:15.816 --rc geninfo_all_blocks=1 00:40:15.816 --rc geninfo_unexecuted_blocks=1 00:40:15.816 00:40:15.816 ' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:15.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.816 --rc genhtml_branch_coverage=1 00:40:15.816 --rc genhtml_function_coverage=1 00:40:15.816 --rc genhtml_legend=1 00:40:15.816 --rc geninfo_all_blocks=1 00:40:15.816 --rc geninfo_unexecuted_blocks=1 00:40:15.816 00:40:15.816 ' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:15.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.816 --rc genhtml_branch_coverage=1 00:40:15.816 --rc genhtml_function_coverage=1 00:40:15.816 --rc genhtml_legend=1 00:40:15.816 --rc geninfo_all_blocks=1 00:40:15.816 --rc geninfo_unexecuted_blocks=1 00:40:15.816 00:40:15.816 ' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:15.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.816 --rc genhtml_branch_coverage=1 00:40:15.816 --rc genhtml_function_coverage=1 00:40:15.816 --rc genhtml_legend=1 00:40:15.816 --rc geninfo_all_blocks=1 00:40:15.816 --rc geninfo_unexecuted_blocks=1 00:40:15.816 00:40:15.816 ' 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.816 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.817 16:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:15.817 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:22.385 16:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:22.385 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:22.386 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:22.386 16:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:22.386 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:22.386 Found net devices under 0000:af:00.0: cvl_0_0 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.386 
16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:22.386 Found net devices under 0000:af:00.1: cvl_0_1 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:22.386 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
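
The remaining link-up, firewall, and ping-verification steps continue in the trace below. Collected in one place, the two-port loopback topology these phy tests build looks roughly like this (interface names cvl_0_0/cvl_0_1 are the ports discovered above; substitute your own NICs):

  # Target port lives in its own network namespace; the initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic arriving on the initiator port:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability check
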
00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:22.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:22.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:40:22.386 00:40:22.386 --- 10.0.0.2 ping statistics --- 00:40:22.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.386 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:22.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:22.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:40:22.386 00:40:22.386 --- 10.0.0.1 ping statistics --- 00:40:22.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.386 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:22.386 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1270183 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1270183 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1270183 ']' 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 [2024-12-16 16:46:10.311336] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:22.387 [2024-12-16 16:46:10.312318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:22.387 [2024-12-16 16:46:10.312354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.387 [2024-12-16 16:46:10.393448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:22.387 [2024-12-16 16:46:10.417434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.387 [2024-12-16 16:46:10.417473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.387 [2024-12-16 16:46:10.417480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.387 [2024-12-16 16:46:10.417486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.387 [2024-12-16 16:46:10.417491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.387 [2024-12-16 16:46:10.418956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:22.387 [2024-12-16 16:46:10.419063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:22.387 [2024-12-16 16:46:10.419170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.387 [2024-12-16 16:46:10.419170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:22.387 [2024-12-16 16:46:10.482379] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:22.387 [2024-12-16 16:46:10.483048] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:22.387 [2024-12-16 16:46:10.483366] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
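
With --interrupt-mode, the reactors and spdk_threads in the notices above (which continue below) sleep on event file descriptors instead of busy-polling. A minimal sketch of reproducing the launch by hand, assuming the namespace set up earlier and this job's repo-root paths:

  # Start the target on 4 cores with all reactors in interrupt mode, inside the
  # target-side namespace (same flags as the trace above):
  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # Once the RPC socket answers (what the suite's waitforlisten helper polls for),
  # reactor state can be inspected; exact output fields vary by SPDK version:
  scripts/rpc.py framework_get_reactors
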
00:40:22.387 [2024-12-16 16:46:10.483787] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:22.387 [2024-12-16 16:46:10.483828] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 [2024-12-16 16:46:10.551981] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 Malloc0 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
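
The listener notice and the negative test case follow below. As a consolidated sketch of the RPC sequence this test performs, again assuming scripts/rpc.py in place of the rpc_cmd wrapper; the final add is expected to fail because cnode1 already holds an exclusive_write claim on Malloc0:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Negative case: the same bdev cannot back a namespace in a second subsystem.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      && echo 'unexpected: add_ns should have failed'   # expect "Invalid parameters"
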
00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 [2024-12-16 16:46:10.640178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:22.387 test case1: single bdev can't be used in multiple subsystems 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 [2024-12-16 16:46:10.667675] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:22.387 [2024-12-16 16:46:10.667696] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:22.387 [2024-12-16 16:46:10.667704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.387 request: 00:40:22.387 { 00:40:22.387 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:22.387 "namespace": { 00:40:22.387 "bdev_name": "Malloc0", 00:40:22.387 "no_auto_visible": false, 00:40:22.387 "hide_metadata": false 00:40:22.387 }, 00:40:22.387 "method": "nvmf_subsystem_add_ns", 00:40:22.387 "req_id": 1 00:40:22.387 } 00:40:22.387 Got JSON-RPC error response 00:40:22.387 response: 00:40:22.387 { 00:40:22.387 "code": -32602, 00:40:22.387 "message": "Invalid parameters" 00:40:22.387 } 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:22.387 16:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:22.387 Adding namespace failed - expected result. 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:22.387 test case2: host connect to nvmf target in multiple paths 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:22.387 [2024-12-16 16:46:10.679762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:22.387 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:22.646 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:22.646 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:22.646 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:22.646 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:22.646 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:24.550 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:24.843 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:24.843 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:24.843 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:24.843 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:24.843 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:24.843 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:24.843 [global] 00:40:24.843 thread=1 00:40:24.843 invalidate=1 
00:40:24.843 rw=write 00:40:24.843 time_based=1 00:40:24.843 runtime=1 00:40:24.843 ioengine=libaio 00:40:24.843 direct=1 00:40:24.843 bs=4096 00:40:24.843 iodepth=1 00:40:24.843 norandommap=0 00:40:24.843 numjobs=1 00:40:24.843 00:40:24.843 verify_dump=1 00:40:24.843 verify_backlog=512 00:40:24.843 verify_state_save=0 00:40:24.843 do_verify=1 00:40:24.843 verify=crc32c-intel 00:40:24.843 [job0] 00:40:24.843 filename=/dev/nvme0n1 00:40:24.843 Could not set queue depth (nvme0n1) 00:40:25.109 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:25.109 fio-3.35 00:40:25.109 Starting 1 thread 00:40:26.047 00:40:26.047 job0: (groupid=0, jobs=1): err= 0: pid=1270846: Mon Dec 16 16:46:14 2024 00:40:26.047 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:40:26.047 slat (nsec): min=9403, max=23734, avg=22434.23, stdev=2924.05 00:40:26.047 clat (usec): min=40434, max=41962, avg=40995.66, stdev=250.07 00:40:26.047 lat (usec): min=40443, max=41985, avg=41018.10, stdev=251.54 00:40:26.047 clat percentiles (usec): 00:40:26.047 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:26.047 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:26.048 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:26.048 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:26.048 | 99.99th=[42206] 00:40:26.048 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:40:26.048 slat (usec): min=9, max=23782, avg=57.27, stdev=1050.59 00:40:26.048 clat (usec): min=123, max=861, avg=204.72, stdev=72.15 00:40:26.048 lat (usec): min=133, max=24030, avg=261.99, stdev=1054.96 00:40:26.048 clat percentiles (usec): 00:40:26.048 | 1.00th=[ 126], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:40:26.048 | 30.00th=[ 135], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 241], 00:40:26.048 | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:40:26.048 | 99.00th=[ 260], 99.50th=[ 627], 99.90th=[ 865], 99.95th=[ 865], 00:40:26.048 | 99.99th=[ 865] 00:40:26.048 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:40:26.048 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:40:26.048 lat (usec) : 250=94.01%, 500=1.12%, 750=0.37%, 1000=0.37% 00:40:26.048 lat (msec) : 50=4.12% 00:40:26.048 cpu : usr=0.29%, sys=0.48%, ctx=536, majf=0, minf=1 00:40:26.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:26.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:26.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:26.048 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:26.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:26.048 00:40:26.048 Run status group 0 (all jobs): 00:40:26.048 READ: bw=84.6KiB/s (86.6kB/s), 84.6KiB/s-84.6KiB/s (86.6kB/s-86.6kB/s), io=88.0KiB (90.1kB), run=1040-1040msec 00:40:26.048 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:40:26.048 00:40:26.048 Disk stats (read/write): 00:40:26.048 nvme0n1: ios=44/512, merge=0/0, ticks=1727/103, in_queue=1830, util=98.20% 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:26.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:26.308 
16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:26.308 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:26.308 rmmod nvme_tcp 00:40:26.567 rmmod nvme_fabrics 00:40:26.567 rmmod nvme_keyring 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1270183 ']' 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1270183 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1270183 ']' 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1270183 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:26.567 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270183 00:40:26.567 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:26.567 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:26.567 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1270183' 00:40:26.567 killing process with pid 1270183 00:40:26.567 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1270183 00:40:26.567 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1270183 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:26.826 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:28.732 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:28.733 00:40:28.733 real 0m13.137s 00:40:28.733 user 0m24.290s 00:40:28.733 sys 0m6.037s 00:40:28.733 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:28.733 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:28.733 ************************************ 00:40:28.733 END TEST nvmf_nmic 00:40:28.733 ************************************ 00:40:28.733 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:28.733 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:28.733 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:28.733 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:28.993 ************************************ 00:40:28.993 START TEST nvmf_fio_target 00:40:28.993 ************************************ 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:28.993 * Looking for test storage... 
00:40:28.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:28.993 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:28.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.994 --rc genhtml_branch_coverage=1 00:40:28.994 --rc genhtml_function_coverage=1 00:40:28.994 --rc genhtml_legend=1 00:40:28.994 --rc geninfo_all_blocks=1 00:40:28.994 --rc geninfo_unexecuted_blocks=1 00:40:28.994 00:40:28.994 ' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:28.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.994 --rc genhtml_branch_coverage=1 00:40:28.994 --rc genhtml_function_coverage=1 00:40:28.994 --rc genhtml_legend=1 00:40:28.994 --rc geninfo_all_blocks=1 00:40:28.994 --rc geninfo_unexecuted_blocks=1 00:40:28.994 00:40:28.994 ' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:28.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.994 --rc genhtml_branch_coverage=1 00:40:28.994 --rc genhtml_function_coverage=1 00:40:28.994 --rc genhtml_legend=1 00:40:28.994 --rc geninfo_all_blocks=1 00:40:28.994 --rc geninfo_unexecuted_blocks=1 00:40:28.994 00:40:28.994 ' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:28.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.994 --rc genhtml_branch_coverage=1 00:40:28.994 --rc genhtml_function_coverage=1 00:40:28.994 --rc genhtml_legend=1 00:40:28.994 --rc geninfo_all_blocks=1 00:40:28.994 --rc geninfo_unexecuted_blocks=1 00:40:28.994 
00:40:28.994 ' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:28.994 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:35.570 16:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:35.570 16:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:35.570 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:35.570 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:35.570 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.570 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:35.571 Found net devices under 0000:af:00.1: cvl_0_1 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:35.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:35.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:40:35.571 00:40:35.571 --- 10.0.0.2 ping statistics --- 00:40:35.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.571 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:35.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:35.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:40:35.571 00:40:35.571 --- 10.0.0.1 ping statistics --- 00:40:35.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.571 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1274477 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1274477 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1274477 ']' 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
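Before the fio_target run, the harness carves the two-port NIC into a point-to-point test link: port cvl_0_0 moves into a private network namespace for the target (10.0.0.2), port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and one iptables rule opens port 4420. The pings above confirm both directions. A condensed sketch of that setup, using the interface and namespace names from this log:

# move the target port into its own namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in from the initiator side
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1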
00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:35.571 [2024-12-16 16:46:23.477042] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:35.571 [2024-12-16 16:46:23.477948] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:35.571 [2024-12-16 16:46:23.477980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.571 [2024-12-16 16:46:23.557359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:35.571 [2024-12-16 16:46:23.580243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.571 [2024-12-16 16:46:23.580279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:35.571 [2024-12-16 16:46:23.580286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.571 [2024-12-16 16:46:23.580292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.571 [2024-12-16 16:46:23.580297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:35.571 [2024-12-16 16:46:23.581770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:35.571 [2024-12-16 16:46:23.581878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:35.571 [2024-12-16 16:46:23.581983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.571 [2024-12-16 16:46:23.581984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:35.571 [2024-12-16 16:46:23.644799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:35.571 [2024-12-16 16:46:23.645767] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:35.571 [2024-12-16 16:46:23.645871] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:35.571 [2024-12-16 16:46:23.646366] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:35.571 [2024-12-16 16:46:23.646388] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
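With the link up, nvmf_tgt is launched inside the target namespace on four cores (-m 0xF) with --interrupt-mode, and the notices above show each poll-group thread being set to interrupt mode as the reactors come up. The harness's waitforlisten then blocks until the RPC socket answers; a bare-bones sketch of an equivalent launch-and-wait step, assuming the workspace layout from this log and the default /var/tmp/spdk.sock RPC socket:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# poll the RPC server until it is ready to accept commands
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done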
00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:35.571 [2024-12-16 16:46:23.882633] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.571 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:35.571 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:35.571 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:35.831 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:35.831 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:36.091 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:36.091 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:36.350 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:36.350 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:36.611 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:36.611 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:36.611 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:36.872 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:36.872 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:37.131 16:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:37.131 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:37.391 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:37.391 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:37.391 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:37.650 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:37.650 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:37.910 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:37.910 [2024-12-16 16:46:26.502546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:38.169 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:38.169 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:38.429 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:38.688 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:38.688 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:38.688 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:38.688 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:38.688 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:38.688 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:41.227 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:41.227 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:40:41.227 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:41.227 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:41.227 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:41.228 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:40:41.228 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:41.228 [global] 00:40:41.228 thread=1 00:40:41.228 invalidate=1 00:40:41.228 rw=write 00:40:41.228 time_based=1 00:40:41.228 runtime=1 00:40:41.228 ioengine=libaio 00:40:41.228 direct=1 00:40:41.228 bs=4096 00:40:41.228 iodepth=1 00:40:41.228 norandommap=0 00:40:41.228 numjobs=1 00:40:41.228 00:40:41.228 verify_dump=1 00:40:41.228 verify_backlog=512 00:40:41.228 verify_state_save=0 00:40:41.228 do_verify=1 00:40:41.228 verify=crc32c-intel 00:40:41.228 [job0] 00:40:41.228 filename=/dev/nvme0n1 00:40:41.228 [job1] 00:40:41.228 filename=/dev/nvme0n2 00:40:41.228 [job2] 00:40:41.228 filename=/dev/nvme0n3 00:40:41.228 [job3] 00:40:41.228 filename=/dev/nvme0n4 00:40:41.228 Could not set queue depth (nvme0n1) 00:40:41.228 Could not set queue depth (nvme0n2) 00:40:41.228 Could not set queue depth (nvme0n3) 00:40:41.228 Could not set queue depth (nvme0n4) 00:40:41.228 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.228 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.228 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.228 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.228 fio-3.35 00:40:41.228 Starting 4 threads 00:40:42.607 00:40:42.607 job0: (groupid=0, jobs=1): err= 0: pid=1275676: Mon Dec 16 16:46:30 2024 00:40:42.607 read: IOPS=23, BW=92.6KiB/s (94.8kB/s)(96.0KiB/1037msec) 00:40:42.607 slat (nsec): min=9872, max=23888, avg=21798.54, stdev=3777.00 00:40:42.607 clat (usec): min=555, max=41380, avg=39312.30, stdev=8256.10 00:40:42.607 lat (usec): min=579, max=41390, avg=39334.10, stdev=8255.62 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:42.607 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:42.607 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:42.607 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:42.607 | 99.99th=[41157] 00:40:42.607 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:40:42.607 slat (nsec): min=4984, max=25496, avg=10469.21, stdev=2391.57 00:40:42.607 clat (usec): min=143, max=270, avg=168.57, stdev=14.87 00:40:42.607 lat (usec): min=152, max=289, avg=179.04, stdev=14.54 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:40:42.607 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:40:42.607 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 202], 00:40:42.607 | 
99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 273], 99.95th=[ 273], 00:40:42.607 | 99.99th=[ 273] 00:40:42.607 bw ( KiB/s): min= 4096, max= 4096, per=23.04%, avg=4096.00, stdev= 0.00, samples=1 00:40:42.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:42.607 lat (usec) : 250=95.34%, 500=0.19%, 750=0.19% 00:40:42.607 lat (msec) : 50=4.29% 00:40:42.607 cpu : usr=0.19%, sys=0.48%, ctx=538, majf=0, minf=1 00:40:42.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.607 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:42.607 job1: (groupid=0, jobs=1): err= 0: pid=1275681: Mon Dec 16 16:46:30 2024 00:40:42.607 read: IOPS=674, BW=2697KiB/s (2762kB/s)(2700KiB/1001msec) 00:40:42.607 slat (nsec): min=6765, max=35835, avg=8557.42, stdev=2794.19 00:40:42.607 clat (usec): min=177, max=41261, avg=1197.33, stdev=6008.99 00:40:42.607 lat (usec): min=185, max=41270, avg=1205.89, stdev=6010.03 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:40:42.607 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:40:42.607 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 416], 00:40:42.607 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:42.607 | 99.99th=[41157] 00:40:42.607 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:40:42.607 slat (nsec): min=9749, max=37883, avg=11809.27, stdev=1940.38 00:40:42.607 clat (usec): min=115, max=511, avg=166.04, stdev=32.99 00:40:42.607 lat (usec): min=127, max=521, avg=177.85, stdev=32.72 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:40:42.607 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 163], 60.00th=[ 176], 00:40:42.607 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 217], 00:40:42.607 | 99.00th=[ 245], 99.50th=[ 273], 99.90th=[ 490], 99.95th=[ 510], 00:40:42.607 | 99.99th=[ 510] 00:40:42.607 bw ( KiB/s): min= 4096, max= 4096, per=23.04%, avg=4096.00, stdev= 0.00, samples=1 00:40:42.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:42.607 lat (usec) : 250=81.52%, 500=17.42%, 750=0.12% 00:40:42.607 lat (msec) : 10=0.06%, 50=0.88% 00:40:42.607 cpu : usr=0.80%, sys=1.90%, ctx=1700, majf=0, minf=1 00:40:42.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.607 issued rwts: total=675,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:42.607 job2: (groupid=0, jobs=1): err= 0: pid=1275694: Mon Dec 16 16:46:30 2024 00:40:42.607 read: IOPS=1865, BW=7461KiB/s (7640kB/s)(7468KiB/1001msec) 00:40:42.607 slat (nsec): min=2852, max=31181, avg=7190.01, stdev=1551.01 00:40:42.607 clat (usec): min=188, max=41363, avg=332.34, stdev=1816.64 00:40:42.607 lat (usec): min=195, max=41373, avg=339.53, stdev=1816.72 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 241], 00:40:42.607 | 30.00th=[ 243], 40.00th=[ 
245], 50.00th=[ 247], 60.00th=[ 249], 00:40:42.607 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 269], 00:40:42.607 | 99.00th=[ 404], 99.50th=[ 478], 99.90th=[40633], 99.95th=[41157], 00:40:42.607 | 99.99th=[41157] 00:40:42.607 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:40:42.607 slat (nsec): min=9633, max=37315, avg=10918.87, stdev=1358.99 00:40:42.607 clat (usec): min=125, max=400, avg=163.50, stdev=33.84 00:40:42.607 lat (usec): min=135, max=438, avg=174.42, stdev=34.14 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:40:42.607 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 00:40:42.607 | 70.00th=[ 165], 80.00th=[ 188], 90.00th=[ 212], 95.00th=[ 227], 00:40:42.607 | 99.00th=[ 285], 99.50th=[ 326], 99.90th=[ 367], 99.95th=[ 396], 00:40:42.607 | 99.99th=[ 400] 00:40:42.607 bw ( KiB/s): min= 8192, max= 8192, per=46.09%, avg=8192.00, stdev= 0.00, samples=1 00:40:42.607 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:42.607 lat (usec) : 250=82.89%, 500=16.91%, 750=0.10% 00:40:42.607 lat (msec) : 50=0.10% 00:40:42.607 cpu : usr=1.60%, sys=4.00%, ctx=3916, majf=0, minf=1 00:40:42.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.607 issued rwts: total=1867,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:42.607 job3: (groupid=0, jobs=1): err= 0: pid=1275699: Mon Dec 16 16:46:30 2024 00:40:42.607 read: IOPS=903, BW=3612KiB/s (3699kB/s)(3616KiB/1001msec) 00:40:42.607 slat (nsec): min=5980, max=23353, avg=7797.05, stdev=1942.95 00:40:42.607 clat (usec): min=181, max=41315, avg=879.61, stdev=5036.17 00:40:42.607 lat (usec): min=189, max=41322, avg=887.41, stdev=5036.52 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 225], 00:40:42.607 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:40:42.607 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 265], 00:40:42.607 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:42.607 | 99.99th=[41157] 00:40:42.607 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:40:42.607 slat (nsec): min=9844, max=46245, avg=11580.16, stdev=2268.51 00:40:42.607 clat (usec): min=138, max=457, avg=177.03, stdev=25.83 00:40:42.607 lat (usec): min=149, max=468, avg=188.61, stdev=26.42 00:40:42.607 clat percentiles (usec): 00:40:42.607 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:40:42.607 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:40:42.607 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 221], 00:40:42.607 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 424], 99.95th=[ 457], 00:40:42.607 | 99.99th=[ 457] 00:40:42.608 bw ( KiB/s): min= 4096, max= 4096, per=23.04%, avg=4096.00, stdev= 0.00, samples=1 00:40:42.608 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:42.608 lat (usec) : 250=92.95%, 500=6.28% 00:40:42.608 lat (msec) : 20=0.05%, 50=0.73% 00:40:42.608 cpu : usr=0.80%, sys=2.10%, ctx=1928, majf=0, minf=2 00:40:42.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.608 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.608 issued rwts: total=904,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:42.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:42.608 00:40:42.608 Run status group 0 (all jobs): 00:40:42.608 READ: bw=13.1MiB/s (13.7MB/s), 92.6KiB/s-7461KiB/s (94.8kB/s-7640kB/s), io=13.6MiB (14.2MB), run=1001-1037msec 00:40:42.608 WRITE: bw=17.4MiB/s (18.2MB/s), 1975KiB/s-8184KiB/s (2022kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1037msec 00:40:42.608 00:40:42.608 Disk stats (read/write): 00:40:42.608 nvme0n1: ios=68/512, merge=0/0, ticks=1059/83, in_queue=1142, util=85.97% 00:40:42.608 nvme0n2: ios=536/581, merge=0/0, ticks=1635/104, in_queue=1739, util=90.04% 00:40:42.608 nvme0n3: ios=1597/1627, merge=0/0, ticks=895/265, in_queue=1160, util=93.75% 00:40:42.608 nvme0n4: ios=775/1024, merge=0/0, ticks=707/174, in_queue=881, util=95.49% 00:40:42.608 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:42.608 [global] 00:40:42.608 thread=1 00:40:42.608 invalidate=1 00:40:42.608 rw=randwrite 00:40:42.608 time_based=1 00:40:42.608 runtime=1 00:40:42.608 ioengine=libaio 00:40:42.608 direct=1 00:40:42.608 bs=4096 00:40:42.608 iodepth=1 00:40:42.608 norandommap=0 00:40:42.608 numjobs=1 00:40:42.608 00:40:42.608 verify_dump=1 00:40:42.608 verify_backlog=512 00:40:42.608 verify_state_save=0 00:40:42.608 do_verify=1 00:40:42.608 verify=crc32c-intel 00:40:42.608 [job0] 00:40:42.608 filename=/dev/nvme0n1 00:40:42.608 [job1] 00:40:42.608 filename=/dev/nvme0n2 00:40:42.608 [job2] 00:40:42.608 filename=/dev/nvme0n3 00:40:42.608 [job3] 00:40:42.608 filename=/dev/nvme0n4 00:40:42.608 Could not set queue depth (nvme0n1) 00:40:42.608 Could not set queue depth (nvme0n2) 00:40:42.608 Could not set queue depth (nvme0n3) 00:40:42.608 Could not set queue depth (nvme0n4) 00:40:42.608 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:42.608 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:42.608 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:42.608 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:42.608 fio-3.35 00:40:42.608 Starting 4 threads 00:40:43.987 00:40:43.987 job0: (groupid=0, jobs=1): err= 0: pid=1276090: Mon Dec 16 16:46:32 2024 00:40:43.987 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:40:43.987 slat (nsec): min=9859, max=24855, avg=22710.64, stdev=3196.33 00:40:43.987 clat (usec): min=40550, max=41031, avg=40946.70, stdev=97.55 00:40:43.987 lat (usec): min=40560, max=41054, avg=40969.41, stdev=100.30 00:40:43.987 clat percentiles (usec): 00:40:43.987 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:43.987 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:43.987 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:43.988 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:43.988 | 99.99th=[41157] 00:40:43.988 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:40:43.988 slat (nsec): min=10770, max=45386, avg=12500.04, 
stdev=2509.77 00:40:43.988 clat (usec): min=143, max=364, avg=187.77, stdev=23.70 00:40:43.988 lat (usec): min=155, max=375, avg=200.27, stdev=24.49 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:40:43.988 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:40:43.988 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 241], 00:40:43.988 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 363], 99.95th=[ 363], 00:40:43.988 | 99.99th=[ 363] 00:40:43.988 bw ( KiB/s): min= 4096, max= 4096, per=29.11%, avg=4096.00, stdev= 0.00, samples=1 00:40:43.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:43.988 lat (usec) : 250=94.19%, 500=1.69% 00:40:43.988 lat (msec) : 50=4.12% 00:40:43.988 cpu : usr=0.50%, sys=0.89%, ctx=536, majf=0, minf=1 00:40:43.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:43.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:43.988 job1: (groupid=0, jobs=1): err= 0: pid=1276109: Mon Dec 16 16:46:32 2024 00:40:43.988 read: IOPS=1109, BW=4439KiB/s (4546kB/s)(4488KiB/1011msec) 00:40:43.988 slat (nsec): min=6891, max=44672, avg=8143.10, stdev=2105.75 00:40:43.988 clat (usec): min=176, max=40997, avg=659.24, stdev=4173.67 00:40:43.988 lat (usec): min=189, max=41009, avg=667.38, stdev=4174.34 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:40:43.988 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 239], 60.00th=[ 243], 00:40:43.988 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:40:43.988 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:43.988 | 99.99th=[41157] 00:40:43.988 write: IOPS=1519, BW=6077KiB/s (6223kB/s)(6144KiB/1011msec); 0 zone resets 00:40:43.988 slat (nsec): min=9760, max=74361, avg=10858.50, stdev=2338.00 00:40:43.988 clat (usec): min=115, max=321, avg=154.42, stdev=20.15 00:40:43.988 lat (usec): min=137, max=395, avg=165.28, stdev=20.84 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 137], 00:40:43.988 | 30.00th=[ 141], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:40:43.988 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:40:43.988 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 322], 00:40:43.988 | 99.99th=[ 322] 00:40:43.988 bw ( KiB/s): min=12288, max=12288, per=87.34%, avg=12288.00, stdev= 0.00, samples=1 00:40:43.988 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:43.988 lat (usec) : 250=93.27%, 500=6.28% 00:40:43.988 lat (msec) : 50=0.45% 00:40:43.988 cpu : usr=3.07%, sys=3.17%, ctx=2658, majf=0, minf=2 00:40:43.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:43.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 issued rwts: total=1122,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:43.988 job2: (groupid=0, jobs=1): err= 0: pid=1276136: Mon Dec 16 16:46:32 2024 00:40:43.988 read: IOPS=521, 
BW=2085KiB/s (2135kB/s)(2116KiB/1015msec) 00:40:43.988 slat (nsec): min=7443, max=26952, avg=8871.31, stdev=2813.31 00:40:43.988 clat (usec): min=190, max=41485, avg=1533.53, stdev=7200.56 00:40:43.988 lat (usec): min=198, max=41512, avg=1542.41, stdev=7203.15 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:40:43.988 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:40:43.988 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 318], 00:40:43.988 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:40:43.988 | 99.99th=[41681] 00:40:43.988 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:40:43.988 slat (nsec): min=10843, max=38853, avg=12214.12, stdev=1929.93 00:40:43.988 clat (usec): min=141, max=331, avg=173.96, stdev=16.19 00:40:43.988 lat (usec): min=152, max=359, avg=186.18, stdev=16.87 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:40:43.988 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:40:43.988 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:40:43.988 | 99.00th=[ 215], 99.50th=[ 243], 99.90th=[ 322], 99.95th=[ 334], 00:40:43.988 | 99.99th=[ 334] 00:40:43.988 bw ( KiB/s): min= 8192, max= 8192, per=58.23%, avg=8192.00, stdev= 0.00, samples=1 00:40:43.988 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:43.988 lat (usec) : 250=96.78%, 500=2.12% 00:40:43.988 lat (msec) : 50=1.09% 00:40:43.988 cpu : usr=0.99%, sys=2.96%, ctx=1554, majf=0, minf=1 00:40:43.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:43.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:43.988 job3: (groupid=0, jobs=1): err= 0: pid=1276142: Mon Dec 16 16:46:32 2024 00:40:43.988 read: IOPS=46, BW=184KiB/s (189kB/s)(188KiB/1019msec) 00:40:43.988 slat (nsec): min=7150, max=23636, avg=14940.87, stdev=7463.17 00:40:43.988 clat (usec): min=209, max=41603, avg=19335.39, stdev=20579.99 00:40:43.988 lat (usec): min=216, max=41611, avg=19350.33, stdev=20581.19 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 210], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 219], 00:40:43.988 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[41157], 00:40:43.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:43.988 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:43.988 | 99.99th=[41681] 00:40:43.988 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:40:43.988 slat (nsec): min=7880, max=31653, avg=10608.50, stdev=1234.40 00:40:43.988 clat (usec): min=141, max=418, avg=194.11, stdev=35.91 00:40:43.988 lat (usec): min=152, max=450, avg=204.71, stdev=35.92 00:40:43.988 clat percentiles (usec): 00:40:43.988 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:40:43.988 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:40:43.988 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 227], 95.00th=[ 260], 00:40:43.988 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 420], 99.95th=[ 420], 00:40:43.988 | 99.99th=[ 420] 00:40:43.988 bw ( KiB/s): min= 4096, max= 4096, 
per=29.11%, avg=4096.00, stdev= 0.00, samples=1 00:40:43.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:43.988 lat (usec) : 250=89.80%, 500=6.26% 00:40:43.988 lat (msec) : 50=3.94% 00:40:43.988 cpu : usr=0.29%, sys=0.49%, ctx=560, majf=0, minf=1 00:40:43.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:43.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.988 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:43.988 00:40:43.988 Run status group 0 (all jobs): 00:40:43.988 READ: bw=6752KiB/s (6914kB/s), 87.2KiB/s-4439KiB/s (89.3kB/s-4546kB/s), io=6880KiB (7045kB), run=1009-1019msec 00:40:43.988 WRITE: bw=13.7MiB/s (14.4MB/s), 2010KiB/s-6077KiB/s (2058kB/s-6223kB/s), io=14.0MiB (14.7MB), run=1009-1019msec 00:40:43.988 00:40:43.988 Disk stats (read/write): 00:40:43.988 nvme0n1: ios=67/512, merge=0/0, ticks=1199/96, in_queue=1295, util=89.08% 00:40:43.988 nvme0n2: ios=1160/1536, merge=0/0, ticks=577/227, in_queue=804, util=86.02% 00:40:43.988 nvme0n3: ios=562/1024, merge=0/0, ticks=1396/167, in_queue=1563, util=96.21% 00:40:43.988 nvme0n4: ios=60/512, merge=0/0, ticks=1566/97, in_queue=1663, util=97.46% 00:40:43.988 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:43.988 [global] 00:40:43.988 thread=1 00:40:43.988 invalidate=1 00:40:43.988 rw=write 00:40:43.988 time_based=1 00:40:43.988 runtime=1 00:40:43.988 ioengine=libaio 00:40:43.988 direct=1 00:40:43.988 bs=4096 00:40:43.988 iodepth=128 00:40:43.988 norandommap=0 00:40:43.988 numjobs=1 00:40:43.988 00:40:43.988 verify_dump=1 00:40:43.988 verify_backlog=512 00:40:43.988 verify_state_save=0 00:40:43.988 do_verify=1 00:40:43.988 verify=crc32c-intel 00:40:43.988 [job0] 00:40:43.988 filename=/dev/nvme0n1 00:40:43.988 [job1] 00:40:43.988 filename=/dev/nvme0n2 00:40:43.988 [job2] 00:40:43.988 filename=/dev/nvme0n3 00:40:43.988 [job3] 00:40:43.988 filename=/dev/nvme0n4 00:40:43.988 Could not set queue depth (nvme0n1) 00:40:43.988 Could not set queue depth (nvme0n2) 00:40:43.988 Could not set queue depth (nvme0n3) 00:40:43.988 Could not set queue depth (nvme0n4) 00:40:44.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:44.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:44.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:44.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:44.248 fio-3.35 00:40:44.248 Starting 4 threads 00:40:45.630 00:40:45.630 job0: (groupid=0, jobs=1): err= 0: pid=1276506: Mon Dec 16 16:46:34 2024 00:40:45.630 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:40:45.630 slat (nsec): min=1036, max=18847k, avg=123620.64, stdev=884831.02 00:40:45.630 clat (usec): min=6812, max=55979, avg=16241.64, stdev=8715.46 00:40:45.630 lat (usec): min=6926, max=55984, avg=16365.26, stdev=8765.80 00:40:45.630 clat percentiles (usec): 00:40:45.630 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10814], 00:40:45.630 | 30.00th=[11338], 
40.00th=[11994], 50.00th=[12125], 60.00th=[13698], 00:40:45.630 | 70.00th=[16909], 80.00th=[21627], 90.00th=[25822], 95.00th=[37487], 00:40:45.630 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:40:45.630 | 99.99th=[55837] 00:40:45.630 write: IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1002msec); 0 zone resets 00:40:45.630 slat (nsec): min=1867, max=13992k, avg=111308.63, stdev=743035.03 00:40:45.630 clat (usec): min=581, max=41785, avg=13901.12, stdev=5095.98 00:40:45.630 lat (usec): min=6472, max=42357, avg=14012.43, stdev=5158.16 00:40:45.630 clat percentiles (usec): 00:40:45.630 | 1.00th=[ 8029], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10421], 00:40:45.630 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11994], 60.00th=[12125], 00:40:45.630 | 70.00th=[14615], 80.00th=[18744], 90.00th=[20841], 95.00th=[25560], 00:40:45.630 | 99.00th=[29754], 99.50th=[32637], 99.90th=[39060], 99.95th=[39060], 00:40:45.630 | 99.99th=[41681] 00:40:45.630 bw ( KiB/s): min=18192, max=18192, per=24.06%, avg=18192.00, stdev= 0.00, samples=1 00:40:45.630 iops : min= 4548, max= 4548, avg=4548.00, stdev= 0.00, samples=1 00:40:45.630 lat (usec) : 750=0.01% 00:40:45.630 lat (msec) : 10=9.62%, 20=72.03%, 50=17.53%, 100=0.81% 00:40:45.630 cpu : usr=1.80%, sys=3.80%, ctx=458, majf=0, minf=1 00:40:45.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:45.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:45.630 issued rwts: total=4096,4276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:45.630 job1: (groupid=0, jobs=1): err= 0: pid=1276507: Mon Dec 16 16:46:34 2024 00:40:45.630 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:45.630 slat (nsec): min=1034, max=12905k, avg=100203.10, stdev=768236.47 00:40:45.630 clat (usec): min=2764, max=46117, avg=13922.99, stdev=5199.63 00:40:45.630 lat (usec): min=2771, max=46120, avg=14023.19, stdev=5247.80 00:40:45.630 clat percentiles (usec): 00:40:45.630 | 1.00th=[ 3130], 5.00th=[ 7242], 10.00th=[ 8717], 20.00th=[ 9896], 00:40:45.630 | 30.00th=[10421], 40.00th=[11600], 50.00th=[12911], 60.00th=[14353], 00:40:45.630 | 70.00th=[15795], 80.00th=[17957], 90.00th=[21627], 95.00th=[23725], 00:40:45.630 | 99.00th=[26084], 99.50th=[26084], 99.90th=[43779], 99.95th=[43779], 00:40:45.630 | 99.99th=[45876] 00:40:45.630 write: IOPS=5088, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:40:45.630 slat (nsec): min=1956, max=16807k, avg=91384.00, stdev=733217.96 00:40:45.630 clat (usec): min=539, max=30522, avg=12314.42, stdev=3902.33 00:40:45.630 lat (usec): min=1370, max=31380, avg=12405.81, stdev=3966.84 00:40:45.630 clat percentiles (usec): 00:40:45.630 | 1.00th=[ 4948], 5.00th=[ 6718], 10.00th=[ 8291], 20.00th=[ 9896], 00:40:45.630 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:40:45.630 | 70.00th=[13435], 80.00th=[15270], 90.00th=[18744], 95.00th=[19530], 00:40:45.630 | 99.00th=[23725], 99.50th=[23725], 99.90th=[27132], 99.95th=[28181], 00:40:45.630 | 99.99th=[30540] 00:40:45.630 bw ( KiB/s): min=18866, max=20904, per=26.30%, avg=19885.00, stdev=1441.08, samples=2 00:40:45.630 iops : min= 4716, max= 5226, avg=4971.00, stdev=360.62, samples=2 00:40:45.630 lat (usec) : 750=0.01% 00:40:45.630 lat (msec) : 2=0.24%, 4=1.09%, 10=20.43%, 20=69.10%, 50=9.13% 00:40:45.630 cpu : usr=3.39%, sys=5.49%, ctx=379, 
majf=0, minf=1 00:40:45.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:45.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:45.631 issued rwts: total=4608,5104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:45.631 job2: (groupid=0, jobs=1): err= 0: pid=1276508: Mon Dec 16 16:46:34 2024 00:40:45.631 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:40:45.631 slat (nsec): min=1075, max=5797.6k, avg=97484.71, stdev=476587.80 00:40:45.631 clat (usec): min=8806, max=19143, avg=12919.49, stdev=1477.02 00:40:45.631 lat (usec): min=9183, max=19152, avg=13016.98, stdev=1428.98 00:40:45.631 clat percentiles (usec): 00:40:45.631 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11207], 20.00th=[11731], 00:40:45.631 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:40:45.631 | 70.00th=[13698], 80.00th=[14353], 90.00th=[14746], 95.00th=[15401], 00:40:45.631 | 99.00th=[16450], 99.50th=[16909], 99.90th=[19268], 99.95th=[19268], 00:40:45.631 | 99.99th=[19268] 00:40:45.631 write: IOPS=4443, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1003msec); 0 zone resets 00:40:45.631 slat (usec): min=2, max=13671, avg=127.27, stdev=708.47 00:40:45.631 clat (usec): min=295, max=125381, avg=16387.01, stdev=18209.37 00:40:45.631 lat (msec): min=2, max=125, avg=16.51, stdev=18.33 00:40:45.631 clat percentiles (msec): 00:40:45.631 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:40:45.631 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:40:45.631 | 70.00th=[ 14], 80.00th=[ 14], 90.00th=[ 16], 95.00th=[ 54], 00:40:45.631 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 126], 00:40:45.631 | 99.99th=[ 126] 00:40:45.631 bw ( KiB/s): min=16064, max=18568, per=22.90%, avg=17316.00, stdev=1770.60, samples=2 00:40:45.631 iops : min= 4016, max= 4642, avg=4329.00, stdev=442.65, samples=2 00:40:45.631 lat (usec) : 500=0.01% 00:40:45.631 lat (msec) : 2=0.01%, 4=0.39%, 10=7.62%, 20=88.39%, 50=0.97% 00:40:45.631 lat (msec) : 100=1.59%, 250=1.02% 00:40:45.631 cpu : usr=1.90%, sys=3.69%, ctx=524, majf=0, minf=1 00:40:45.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:45.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:45.631 issued rwts: total=4096,4457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:45.631 job3: (groupid=0, jobs=1): err= 0: pid=1276509: Mon Dec 16 16:46:34 2024 00:40:45.631 read: IOPS=4851, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1003msec) 00:40:45.631 slat (nsec): min=1129, max=15506k, avg=103667.58, stdev=695720.83 00:40:45.631 clat (usec): min=1630, max=62034, avg=13665.26, stdev=4635.93 00:40:45.631 lat (usec): min=5892, max=62039, avg=13768.93, stdev=4653.70 00:40:45.631 clat percentiles (usec): 00:40:45.631 | 1.00th=[ 7635], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11207], 00:40:45.631 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13435], 00:40:45.631 | 70.00th=[14091], 80.00th=[14484], 90.00th=[17171], 95.00th=[22152], 00:40:45.631 | 99.00th=[30016], 99.50th=[30016], 99.90th=[57934], 99.95th=[57934], 00:40:45.631 | 99.99th=[62129] 00:40:45.631 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 
00:40:45.631 slat (nsec): min=1882, max=9930.3k, avg=83646.88, stdev=532024.00 00:40:45.631 clat (usec): min=4162, max=21503, avg=11807.68, stdev=1807.42 00:40:45.631 lat (usec): min=4199, max=21523, avg=11891.33, stdev=1836.34 00:40:45.631 clat percentiles (usec): 00:40:45.631 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[10814], 00:40:45.631 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:40:45.631 | 70.00th=[12911], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:40:45.631 | 99.00th=[15401], 99.50th=[15664], 99.90th=[19006], 99.95th=[20055], 00:40:45.631 | 99.99th=[21627] 00:40:45.631 bw ( KiB/s): min=20480, max=20480, per=27.09%, avg=20480.00, stdev= 0.00, samples=2 00:40:45.631 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:40:45.631 lat (msec) : 2=0.01%, 10=10.44%, 20=86.66%, 50=2.74%, 100=0.14% 00:40:45.631 cpu : usr=3.99%, sys=4.29%, ctx=435, majf=0, minf=1 00:40:45.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:45.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:45.631 issued rwts: total=4866,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:45.631 00:40:45.631 Run status group 0 (all jobs): 00:40:45.631 READ: bw=68.8MiB/s (72.1MB/s), 16.0MiB/s-19.0MiB/s (16.7MB/s-19.9MB/s), io=69.0MiB (72.4MB), run=1002-1003msec 00:40:45.631 WRITE: bw=73.8MiB/s (77.4MB/s), 16.7MiB/s-19.9MiB/s (17.5MB/s-20.9MB/s), io=74.1MiB (77.6MB), run=1002-1003msec 00:40:45.631 00:40:45.631 Disk stats (read/write): 00:40:45.631 nvme0n1: ios=3667/4096, merge=0/0, ticks=21721/21154, in_queue=42875, util=96.89% 00:40:45.631 nvme0n2: ios=3893/4096, merge=0/0, ticks=41001/38090, in_queue=79091, util=86.11% 00:40:45.631 nvme0n3: ios=3228/3584, merge=0/0, ticks=11893/17398, in_queue=29291, util=98.62% 00:40:45.631 nvme0n4: ios=4082/4096, merge=0/0, ticks=31047/24690, in_queue=55737, util=89.53% 00:40:45.631 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:45.631 [global] 00:40:45.631 thread=1 00:40:45.631 invalidate=1 00:40:45.631 rw=randwrite 00:40:45.631 time_based=1 00:40:45.631 runtime=1 00:40:45.631 ioengine=libaio 00:40:45.631 direct=1 00:40:45.631 bs=4096 00:40:45.631 iodepth=128 00:40:45.631 norandommap=0 00:40:45.631 numjobs=1 00:40:45.631 00:40:45.631 verify_dump=1 00:40:45.631 verify_backlog=512 00:40:45.631 verify_state_save=0 00:40:45.631 do_verify=1 00:40:45.631 verify=crc32c-intel 00:40:45.631 [job0] 00:40:45.631 filename=/dev/nvme0n1 00:40:45.631 [job1] 00:40:45.631 filename=/dev/nvme0n2 00:40:45.631 [job2] 00:40:45.631 filename=/dev/nvme0n3 00:40:45.631 [job3] 00:40:45.631 filename=/dev/nvme0n4 00:40:45.631 Could not set queue depth (nvme0n1) 00:40:45.631 Could not set queue depth (nvme0n2) 00:40:45.631 Could not set queue depth (nvme0n3) 00:40:45.631 Could not set queue depth (nvme0n4) 00:40:45.891 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:45.891 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:45.891 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
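The parameters the wrapper printed for this run correspond to a fio job file along the following lines — a reconstruction from the logged [global] stanza and filenames, not the wrapper's literal temporary file; the per-job listing continues below:

  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=128
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4

Relative to the earlier runs only rw and iodepth differ, matching the wrapper's -t randwrite -d 128 arguments; the crc32c-intel verify settings are what give each run its data-integrity check.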
00:40:45.891 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:45.891 fio-3.35 00:40:45.891 Starting 4 threads 00:40:47.291 00:40:47.291 job0: (groupid=0, jobs=1): err= 0: pid=1276871: Mon Dec 16 16:46:35 2024 00:40:47.291 read: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(23.0MiB/1001msec) 00:40:47.291 slat (nsec): min=1581, max=5107.8k, avg=78666.52, stdev=479375.07 00:40:47.291 clat (usec): min=689, max=15149, avg=10132.88, stdev=1624.37 00:40:47.291 lat (usec): min=4382, max=15323, avg=10211.55, stdev=1646.21 00:40:47.291 clat percentiles (usec): 00:40:47.291 | 1.00th=[ 6259], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 8848], 00:40:47.291 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10290], 00:40:47.291 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12387], 95.00th=[13042], 00:40:47.291 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14615], 99.95th=[15008], 00:40:47.291 | 99.99th=[15139] 00:40:47.291 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:40:47.291 slat (usec): min=2, max=21825, avg=81.23, stdev=563.90 00:40:47.291 clat (usec): min=883, max=44124, avg=10927.50, stdev=3845.52 00:40:47.291 lat (usec): min=919, max=44237, avg=11008.72, stdev=3888.77 00:40:47.291 clat percentiles (usec): 00:40:47.291 | 1.00th=[ 4686], 5.00th=[ 7504], 10.00th=[ 9372], 20.00th=[ 9634], 00:40:47.291 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:40:47.291 | 70.00th=[10290], 80.00th=[10421], 90.00th=[13042], 95.00th=[23462], 00:40:47.291 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28181], 99.95th=[28181], 00:40:47.291 | 99.99th=[44303] 00:40:47.291 bw ( KiB/s): min=24576, max=24576, per=34.00%, avg=24576.00, stdev= 0.00, samples=1 00:40:47.291 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:40:47.291 lat (usec) : 750=0.01%, 1000=0.02% 00:40:47.291 lat (msec) : 10=46.49%, 20=49.88%, 50=3.61% 00:40:47.291 cpu : usr=4.10%, sys=8.00%, ctx=623, majf=0, minf=1 00:40:47.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:47.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:47.291 issued rwts: total=5885,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:47.291 job1: (groupid=0, jobs=1): err= 0: pid=1276872: Mon Dec 16 16:46:35 2024 00:40:47.291 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:40:47.291 slat (nsec): min=1349, max=4692.1k, avg=77936.77, stdev=472185.80 00:40:47.291 clat (usec): min=3409, max=16035, avg=10231.76, stdev=1625.78 00:40:47.291 lat (usec): min=3416, max=16041, avg=10309.70, stdev=1641.22 00:40:47.291 clat percentiles (usec): 00:40:47.291 | 1.00th=[ 6915], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 8848], 00:40:47.291 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10683], 00:40:47.291 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12256], 95.00th=[12911], 00:40:47.291 | 99.00th=[13829], 99.50th=[14222], 99.90th=[14746], 99.95th=[14746], 00:40:47.291 | 99.99th=[16057] 00:40:47.291 write: IOPS=6133, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:40:47.291 slat (usec): min=2, max=21177, avg=78.23, stdev=512.81 00:40:47.291 clat (usec): min=454, max=35336, avg=10420.44, stdev=2974.84 00:40:47.291 lat (usec): min=3380, max=35355, avg=10498.67, stdev=3003.93 00:40:47.291 clat percentiles (usec): 
00:40:47.291 | 1.00th=[ 6783], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9765], 00:40:47.291 | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:40:47.291 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[12911], 00:40:47.291 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:40:47.291 | 99.99th=[35390] 00:40:47.291 bw ( KiB/s): min=24576, max=24576, per=34.00%, avg=24576.00, stdev= 0.00, samples=2 00:40:47.291 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:40:47.291 lat (usec) : 500=0.01% 00:40:47.291 lat (msec) : 4=0.31%, 10=44.29%, 20=54.36%, 50=1.03% 00:40:47.291 cpu : usr=5.89%, sys=7.49%, ctx=523, majf=0, minf=2 00:40:47.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:47.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:47.291 issued rwts: total=6144,6146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:47.291 job2: (groupid=0, jobs=1): err= 0: pid=1276873: Mon Dec 16 16:46:35 2024 00:40:47.291 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:40:47.291 slat (nsec): min=1398, max=15740k, avg=145288.99, stdev=993251.83 00:40:47.291 clat (usec): min=7077, max=66038, avg=18119.64, stdev=8079.39 00:40:47.291 lat (usec): min=7088, max=66044, avg=18264.92, stdev=8155.37 00:40:47.291 clat percentiles (usec): 00:40:47.291 | 1.00th=[ 9634], 5.00th=[11338], 10.00th=[12256], 20.00th=[12911], 00:40:47.291 | 30.00th=[14091], 40.00th=[14877], 50.00th=[16188], 60.00th=[17433], 00:40:47.291 | 70.00th=[18744], 80.00th=[20579], 90.00th=[24249], 95.00th=[33817], 00:40:47.291 | 99.00th=[52167], 99.50th=[58983], 99.90th=[65799], 99.95th=[65799], 00:40:47.291 | 99.99th=[65799] 00:40:47.291 write: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1007msec); 0 zone resets 00:40:47.291 slat (usec): min=2, max=18148, avg=170.61, stdev=972.45 00:40:47.291 clat (usec): min=1506, max=66039, avg=23023.27, stdev=13136.06 00:40:47.291 lat (usec): min=1521, max=66048, avg=23193.89, stdev=13229.37 00:40:47.291 clat percentiles (usec): 00:40:47.291 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[10290], 20.00th=[13566], 00:40:47.291 | 30.00th=[15008], 40.00th=[17957], 50.00th=[21365], 60.00th=[22676], 00:40:47.291 | 70.00th=[22938], 80.00th=[24511], 90.00th=[50594], 95.00th=[55313], 00:40:47.291 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[65799], 00:40:47.291 | 99.99th=[65799] 00:40:47.291 bw ( KiB/s): min= 9432, max=15144, per=17.00%, avg=12288.00, stdev=4038.99, samples=2 00:40:47.291 iops : min= 2358, max= 3786, avg=3072.00, stdev=1009.75, samples=2 00:40:47.291 lat (msec) : 2=0.13%, 4=0.02%, 10=5.29%, 20=54.83%, 50=33.06% 00:40:47.291 lat (msec) : 100=6.67% 00:40:47.291 cpu : usr=3.48%, sys=3.98%, ctx=279, majf=0, minf=2 00:40:47.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:40:47.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:47.291 issued rwts: total=3072,3131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:47.291 job3: (groupid=0, jobs=1): err= 0: pid=1276874: Mon Dec 16 16:46:35 2024 00:40:47.291 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:40:47.291 slat (nsec): 
min=1467, max=21646k, avg=215878.80, stdev=1450778.49 00:40:47.291 clat (usec): min=11456, max=60784, avg=26791.59, stdev=12238.78 00:40:47.291 lat (usec): min=11461, max=61076, avg=27007.47, stdev=12353.47 00:40:47.291 clat percentiles (usec): 00:40:47.291 | 1.00th=[11469], 5.00th=[13698], 10.00th=[13960], 20.00th=[14484], 00:40:47.291 | 30.00th=[15139], 40.00th=[17957], 50.00th=[24249], 60.00th=[30802], 00:40:47.291 | 70.00th=[35914], 80.00th=[38011], 90.00th=[41157], 95.00th=[48497], 00:40:47.291 | 99.00th=[57410], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:40:47.292 | 99.99th=[60556] 00:40:47.292 write: IOPS=2769, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1008msec); 0 zone resets 00:40:47.292 slat (usec): min=2, max=18349, avg=151.43, stdev=895.09 00:40:47.292 clat (usec): min=2947, max=62573, avg=21137.42, stdev=10715.64 00:40:47.292 lat (usec): min=7149, max=62579, avg=21288.85, stdev=10771.61 00:40:47.292 clat percentiles (usec): 00:40:47.292 | 1.00th=[ 9765], 5.00th=[10028], 10.00th=[13304], 20.00th=[13698], 00:40:47.292 | 30.00th=[13960], 40.00th=[14222], 50.00th=[20579], 60.00th=[22676], 00:40:47.292 | 70.00th=[22938], 80.00th=[23725], 90.00th=[36963], 95.00th=[46924], 00:40:47.292 | 99.00th=[58459], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:40:47.292 | 99.99th=[62653] 00:40:47.292 bw ( KiB/s): min=10232, max=11080, per=14.74%, avg=10656.00, stdev=599.63, samples=2 00:40:47.292 iops : min= 2558, max= 2770, avg=2664.00, stdev=149.91, samples=2 00:40:47.292 lat (msec) : 4=0.02%, 10=2.58%, 20=43.59%, 50=50.09%, 100=3.72% 00:40:47.292 cpu : usr=2.18%, sys=4.47%, ctx=243, majf=0, minf=1 00:40:47.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:40:47.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:47.292 issued rwts: total=2560,2792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:47.292 00:40:47.292 Run status group 0 (all jobs): 00:40:47.292 READ: bw=68.4MiB/s (71.8MB/s), 9.92MiB/s-24.0MiB/s (10.4MB/s-25.1MB/s), io=69.0MiB (72.3MB), run=1001-1008msec 00:40:47.292 WRITE: bw=70.6MiB/s (74.0MB/s), 10.8MiB/s-24.0MiB/s (11.3MB/s-25.1MB/s), io=71.1MiB (74.6MB), run=1001-1008msec 00:40:47.292 00:40:47.292 Disk stats (read/write): 00:40:47.292 nvme0n1: ios=5075/5120, merge=0/0, ticks=25949/30002, in_queue=55951, util=97.90% 00:40:47.292 nvme0n2: ios=5125/5296, merge=0/0, ticks=25571/25403, in_queue=50974, util=86.79% 00:40:47.292 nvme0n3: ios=2357/2560, merge=0/0, ticks=42123/63124, in_queue=105247, util=88.96% 00:40:47.292 nvme0n4: ios=2260/2560, merge=0/0, ticks=28433/27676, in_queue=56109, util=97.27% 00:40:47.292 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:47.292 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1277045 00:40:47.292 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:47.292 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:47.292 [global] 00:40:47.292 thread=1 00:40:47.292 invalidate=1 00:40:47.292 rw=read 00:40:47.292 time_based=1 00:40:47.292 runtime=10 00:40:47.292 ioengine=libaio 00:40:47.292 direct=1 00:40:47.292 bs=4096 00:40:47.292 
iodepth=1 00:40:47.292 norandommap=1 00:40:47.292 numjobs=1 00:40:47.292 00:40:47.292 [job0] 00:40:47.292 filename=/dev/nvme0n1 00:40:47.292 [job1] 00:40:47.292 filename=/dev/nvme0n2 00:40:47.292 [job2] 00:40:47.292 filename=/dev/nvme0n3 00:40:47.292 [job3] 00:40:47.292 filename=/dev/nvme0n4 00:40:47.292 Could not set queue depth (nvme0n1) 00:40:47.292 Could not set queue depth (nvme0n2) 00:40:47.292 Could not set queue depth (nvme0n3) 00:40:47.292 Could not set queue depth (nvme0n4) 00:40:47.549 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:47.549 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:47.549 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:47.549 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:47.549 fio-3.35 00:40:47.549 Starting 4 threads 00:40:50.072 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:50.330 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28897280, buflen=4096 00:40:50.330 fio: pid=1277245, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:50.330 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:50.587 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:50.587 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:50.587 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=307200, buflen=4096 00:40:50.587 fio: pid=1277244, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:50.845 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49000448, buflen=4096 00:40:50.845 fio: pid=1277242, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:50.845 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:50.845 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:50.845 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52408320, buflen=4096 00:40:50.845 fio: pid=1277243, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:50.845 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:50.845 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:50.845 00:40:50.845 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277242: Mon Dec 16 16:46:39 2024 00:40:50.845 read: 
IOPS=3817, BW=14.9MiB/s (15.6MB/s)(46.7MiB/3134msec) 00:40:50.845 slat (usec): min=7, max=28943, avg=16.09, stdev=391.11 00:40:50.845 clat (usec): min=176, max=871, avg=241.99, stdev=16.90 00:40:50.845 lat (usec): min=184, max=29336, avg=258.08, stdev=393.69 00:40:50.845 clat percentiles (usec): 00:40:50.845 | 1.00th=[ 188], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 237], 00:40:50.845 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:40:50.845 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:40:50.845 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 392], 00:40:50.845 | 99.99th=[ 545] 00:40:50.845 bw ( KiB/s): min=14863, max=15520, per=40.31%, avg=15405.17, stdev=265.72, samples=6 00:40:50.845 iops : min= 3715, max= 3880, avg=3851.17, stdev=66.74, samples=6 00:40:50.845 lat (usec) : 250=74.86%, 500=25.11%, 750=0.02%, 1000=0.01% 00:40:50.845 cpu : usr=2.36%, sys=6.67%, ctx=11968, majf=0, minf=1 00:40:50.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 issued rwts: total=11964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:50.846 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277243: Mon Dec 16 16:46:39 2024 00:40:50.846 read: IOPS=3833, BW=15.0MiB/s (15.7MB/s)(50.0MiB/3338msec) 00:40:50.846 slat (usec): min=6, max=14970, avg=11.11, stdev=214.79 00:40:50.846 clat (usec): min=179, max=1549, avg=246.93, stdev=42.51 00:40:50.846 lat (usec): min=186, max=15417, avg=258.04, stdev=222.35 00:40:50.846 clat percentiles (usec): 00:40:50.846 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 219], 00:40:50.846 | 30.00th=[ 227], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:40:50.846 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 289], 95.00th=[ 306], 00:40:50.846 | 99.00th=[ 478], 99.50th=[ 486], 99.90th=[ 510], 99.95th=[ 519], 00:40:50.846 | 99.99th=[ 734] 00:40:50.846 bw ( KiB/s): min=15096, max=15304, per=39.74%, avg=15187.33, stdev=78.56, samples=6 00:40:50.846 iops : min= 3774, max= 3826, avg=3796.83, stdev=19.64, samples=6 00:40:50.846 lat (usec) : 250=60.27%, 500=39.57%, 750=0.14% 00:40:50.846 lat (msec) : 2=0.01% 00:40:50.846 cpu : usr=1.20%, sys=3.36%, ctx=12803, majf=0, minf=2 00:40:50.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 issued rwts: total=12796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:50.846 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277244: Mon Dec 16 16:46:39 2024 00:40:50.846 read: IOPS=25, BW=102KiB/s (104kB/s)(300KiB/2943msec) 00:40:50.846 slat (nsec): min=7503, max=31484, avg=17797.11, stdev=6113.69 00:40:50.846 clat (usec): min=306, max=42031, avg=38936.81, stdev=9199.47 00:40:50.846 lat (usec): min=337, max=42052, avg=38954.72, stdev=9197.96 00:40:50.846 clat percentiles (usec): 00:40:50.846 | 1.00th=[ 306], 5.00th=[ 619], 10.00th=[40633], 20.00th=[41157], 00:40:50.846 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:40:50.846 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:40:50.846 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:50.846 | 99.99th=[42206] 00:40:50.846 bw ( KiB/s): min= 96, max= 104, per=0.26%, avg=100.80, stdev= 4.38, samples=5 00:40:50.846 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:40:50.846 lat (usec) : 500=3.95%, 750=1.32% 00:40:50.846 lat (msec) : 50=93.42% 00:40:50.846 cpu : usr=0.07%, sys=0.00%, ctx=78, majf=0, minf=2 00:40:50.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:50.846 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277245: Mon Dec 16 16:46:39 2024 00:40:50.846 read: IOPS=2587, BW=10.1MiB/s (10.6MB/s)(27.6MiB/2727msec) 00:40:50.846 slat (nsec): min=7059, max=41528, avg=8225.51, stdev=1426.03 00:40:50.846 clat (usec): min=197, max=557, avg=372.67, stdev=86.86 00:40:50.846 lat (usec): min=204, max=565, avg=380.90, stdev=86.97 00:40:50.846 clat percentiles (usec): 00:40:50.846 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 260], 00:40:50.846 | 30.00th=[ 285], 40.00th=[ 404], 50.00th=[ 408], 60.00th=[ 412], 00:40:50.846 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 490], 00:40:50.846 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 515], 99.95th=[ 519], 00:40:50.846 | 99.99th=[ 562] 00:40:50.846 bw ( KiB/s): min= 9032, max=13184, per=27.51%, avg=10512.00, stdev=2023.45, samples=5 00:40:50.846 iops : min= 2258, max= 3296, avg=2628.00, stdev=505.86, samples=5 00:40:50.846 lat (usec) : 250=12.37%, 500=85.53%, 750=2.08% 00:40:50.846 cpu : usr=1.50%, sys=4.22%, ctx=7056, majf=0, minf=2 00:40:50.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.846 issued rwts: total=7056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:50.846 00:40:50.846 Run status group 0 (all jobs): 00:40:50.846 READ: bw=37.3MiB/s (39.1MB/s), 102KiB/s-15.0MiB/s (104kB/s-15.7MB/s), io=125MiB (131MB), run=2727-3338msec 00:40:50.846 00:40:50.846 Disk stats (read/write): 00:40:50.846 nvme0n1: ios=11932/0, merge=0/0, ticks=2754/0, in_queue=2754, util=93.31% 00:40:50.846 nvme0n2: ios=11811/0, merge=0/0, ticks=3736/0, in_queue=3736, util=97.87% 00:40:50.846 nvme0n3: ios=106/0, merge=0/0, ticks=3187/0, in_queue=3187, util=99.39% 00:40:50.846 nvme0n4: ios=6808/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.41% 00:40:51.103 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:51.103 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:51.360 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:40:51.360 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:51.622 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:51.622 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:51.880 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:51.880 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:51.880 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:51.880 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1277045 00:40:51.880 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:51.880 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:52.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.137 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:52.138 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:52.138 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:52.138 nvmf hotplug test: fio failed as expected 00:40:52.138 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:52.396 16:46:40 
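Stepping back, the hotplug check that produced the io_u "Operation not supported" errors above follows a simple pattern: launch a long time_based read workload in the background, delete the backing bdevs out from under the live namespaces, and count a failing fio as a pass. A minimal sketch of that control flow, paraphrasing target/fio.sh with the names from the trace:

  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  ./scripts/rpc.py bdev_raid_delete concat0
  ./scripts/rpc.py bdev_raid_delete raid0
  for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
      ./scripts/rpc.py bdev_malloc_delete $malloc_bdev
  done
  fio_status=0
  wait $fio_pid || fio_status=4
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  if [ "$fio_status" -eq 0 ]; then
      :   # not reached here; the reads are expected to fail once the bdevs are gone
  else
      echo "nvmf hotplug test: fio failed as expected"
  fi

This is why the trace above sets fio_status=4 after wait and then prints the "failed as expected" message: for this test, a clean fio exit would itself be the failure.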
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.396 rmmod nvme_tcp 00:40:52.396 rmmod nvme_fabrics 00:40:52.396 rmmod nvme_keyring 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1274477 ']' 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1274477 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1274477 ']' 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1274477 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1274477 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1274477' 00:40:52.396 killing process with pid 1274477 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1274477 00:40:52.396 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1274477 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:52.656 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:55.195 00:40:55.195 real 0m25.822s 00:40:55.195 user 1m31.413s 00:40:55.195 sys 0m11.259s 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:55.195 ************************************ 00:40:55.195 END TEST nvmf_fio_target 00:40:55.195 ************************************ 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:55.195 ************************************ 00:40:55.195 START TEST nvmf_bdevio 00:40:55.195 ************************************ 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:55.195 * Looking for test storage... 
00:40:55.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:55.195 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:55.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.196 --rc genhtml_branch_coverage=1 00:40:55.196 --rc genhtml_function_coverage=1 00:40:55.196 --rc genhtml_legend=1 00:40:55.196 --rc geninfo_all_blocks=1 00:40:55.196 --rc geninfo_unexecuted_blocks=1 00:40:55.196 00:40:55.196 ' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:55.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.196 --rc genhtml_branch_coverage=1 00:40:55.196 --rc genhtml_function_coverage=1 00:40:55.196 --rc genhtml_legend=1 00:40:55.196 --rc geninfo_all_blocks=1 00:40:55.196 --rc geninfo_unexecuted_blocks=1 00:40:55.196 00:40:55.196 ' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:55.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.196 --rc genhtml_branch_coverage=1 00:40:55.196 --rc genhtml_function_coverage=1 00:40:55.196 --rc genhtml_legend=1 00:40:55.196 --rc geninfo_all_blocks=1 00:40:55.196 --rc geninfo_unexecuted_blocks=1 00:40:55.196 00:40:55.196 ' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:55.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.196 --rc genhtml_branch_coverage=1 00:40:55.196 --rc genhtml_function_coverage=1 00:40:55.196 --rc genhtml_legend=1 00:40:55.196 --rc geninfo_all_blocks=1 00:40:55.196 --rc geninfo_unexecuted_blocks=1 00:40:55.196 00:40:55.196 ' 00:40:55.196 16:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.196 16:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:55.196 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.197 16:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:00.520 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:00.520 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:00.520 16:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:00.520 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:00.520 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:00.521 Found net devices under 0000:af:00.0: cvl_0_0 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:00.521 Found net devices under 0000:af:00.1: cvl_0_1 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:00.521 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:00.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:00.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:41:00.807 00:41:00.807 --- 10.0.0.2 ping statistics --- 00:41:00.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.807 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:00.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:00.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:41:00.807 00:41:00.807 --- 10.0.0.1 ping statistics --- 00:41:00.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.807 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:00.807 16:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1281416 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1281416 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1281416 ']' 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:00.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:00.807 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:00.807 [2024-12-16 16:46:49.334248] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:00.807 [2024-12-16 16:46:49.335150] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:00.807 [2024-12-16 16:46:49.335186] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:01.080 [2024-12-16 16:46:49.412785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:01.081 [2024-12-16 16:46:49.435582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:01.081 [2024-12-16 16:46:49.435619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:01.081 [2024-12-16 16:46:49.435627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:01.081 [2024-12-16 16:46:49.435632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:01.081 [2024-12-16 16:46:49.435638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:01.081 [2024-12-16 16:46:49.437001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:01.081 [2024-12-16 16:46:49.437131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:01.081 [2024-12-16 16:46:49.437240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:01.081 [2024-12-16 16:46:49.437240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:01.081 [2024-12-16 16:46:49.498985] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
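[annotation] The nvmfappstart traces above launch nvmf_tgt inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled and then wait for its RPC socket. A minimal sketch of that bring-up, assuming the workspace-relative SPDK layout from the trace and approximating waitforlisten with an rpc.py probe (rpc_get_methods is a standard SPDK RPC; the polling loop here is illustrative, not the script's exact code):

    # start the target on cores 3-6 (mask 0x78) in interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done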
00:41:01.081 [2024-12-16 16:46:49.499842] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:01.081 [2024-12-16 16:46:49.499999] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:01.081 [2024-12-16 16:46:49.500500] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:01.081 [2024-12-16 16:46:49.500539] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.081 [2024-12-16 16:46:49.561926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.081 Malloc0 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.081 16:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.081 [2024-12-16 16:46:49.637947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:01.081 { 00:41:01.081 "params": { 00:41:01.081 "name": "Nvme$subsystem", 00:41:01.081 "trtype": "$TEST_TRANSPORT", 00:41:01.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.081 "adrfam": "ipv4", 00:41:01.081 "trsvcid": "$NVMF_PORT", 00:41:01.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.081 "hdgst": ${hdgst:-false}, 00:41:01.081 "ddgst": ${ddgst:-false} 00:41:01.081 }, 00:41:01.081 "method": "bdev_nvme_attach_controller" 00:41:01.081 } 00:41:01.081 EOF 00:41:01.081 )") 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:01.081 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:01.081 "params": { 00:41:01.081 "name": "Nvme1", 00:41:01.081 "trtype": "tcp", 00:41:01.081 "traddr": "10.0.0.2", 00:41:01.081 "adrfam": "ipv4", 00:41:01.081 "trsvcid": "4420", 00:41:01.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:01.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:01.081 "hdgst": false, 00:41:01.081 "ddgst": false 00:41:01.081 }, 00:41:01.081 "method": "bdev_nvme_attach_controller" 00:41:01.081 }' 00:41:01.340 [2024-12-16 16:46:49.688945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
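[annotation] Pulling the rpc_cmd traces above together: before bdevio attaches, the target side is configured with five RPCs. Restated as a plain shell sequence with the same arguments as the trace (rpc.py is assumed to talk to the default /var/tmp/spdk.sock):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420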
00:41:01.340 [2024-12-16 16:46:49.688989] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281444 ] 00:41:01.340 [2024-12-16 16:46:49.783554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:01.340 [2024-12-16 16:46:49.808511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:01.340 [2024-12-16 16:46:49.808616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.340 [2024-12-16 16:46:49.808617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:01.598 I/O targets: 00:41:01.598 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:01.598 00:41:01.598 00:41:01.598 CUnit - A unit testing framework for C - Version 2.1-3 00:41:01.598 http://cunit.sourceforge.net/ 00:41:01.598 00:41:01.598 00:41:01.598 Suite: bdevio tests on: Nvme1n1 00:41:01.598 Test: blockdev write read block ...passed 00:41:01.598 Test: blockdev write zeroes read block ...passed 00:41:01.598 Test: blockdev write zeroes read no split ...passed 00:41:01.598 Test: blockdev write zeroes read split ...passed 00:41:01.598 Test: blockdev write zeroes read split partial ...passed 00:41:01.598 Test: blockdev reset ...[2024-12-16 16:46:50.144649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:01.598 [2024-12-16 16:46:50.144712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cc340 (9): Bad file descriptor 00:41:01.857 [2024-12-16 16:46:50.237113] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:41:01.857 passed 00:41:01.857 Test: blockdev write read 8 blocks ...passed 00:41:01.857 Test: blockdev write read size > 128k ...passed 00:41:01.857 Test: blockdev write read invalid size ...passed 00:41:01.857 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:01.857 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:01.857 Test: blockdev write read max offset ...passed 00:41:01.857 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:01.857 Test: blockdev writev readv 8 blocks ...passed 00:41:01.857 Test: blockdev writev readv 30 x 1block ...passed 00:41:01.857 Test: blockdev writev readv block ...passed 00:41:01.857 Test: blockdev writev readv size > 128k ...passed 00:41:01.857 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:01.857 Test: blockdev comparev and writev ...[2024-12-16 16:46:50.449983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.450032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.450342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.450367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.450662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.450685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.450978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.450993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:01.857 [2024-12-16 16:46:50.451004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:01.857 [2024-12-16 16:46:50.451012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:02.115 passed 00:41:02.115 Test: blockdev nvme passthru rw ...passed 00:41:02.115 Test: blockdev nvme passthru vendor specific ...[2024-12-16 16:46:50.534466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:02.115 [2024-12-16 16:46:50.534483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:02.115 [2024-12-16 16:46:50.534588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:02.115 [2024-12-16 16:46:50.534598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:02.115 [2024-12-16 16:46:50.534706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:02.115 [2024-12-16 16:46:50.534716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:02.115 [2024-12-16 16:46:50.534820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:02.115 [2024-12-16 16:46:50.534831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:02.115 passed 00:41:02.115 Test: blockdev nvme admin passthru ...passed 00:41:02.115 Test: blockdev copy ...passed 00:41:02.115 00:41:02.115 Run Summary: Type Total Ran Passed Failed Inactive 00:41:02.115 suites 1 1 n/a 0 0 00:41:02.115 tests 23 23 23 0 0 00:41:02.115 asserts 152 152 152 0 n/a 00:41:02.115 00:41:02.115 Elapsed time = 1.190 seconds 00:41:02.116 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:02.116 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.116 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:02.374 rmmod nvme_tcp 00:41:02.374 rmmod nvme_fabrics 00:41:02.374 rmmod nvme_keyring 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
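[annotation] The cleanup running here mirrors the fio-target teardown earlier in the log: delete the subsystem, unload the initiator-side kernel modules, kill the target, and flush the test interface. A condensed sketch of the steps visible in the surrounding traces (the pid 1281416 and the cvl_0_1 interface are specific to this run, and the script actually retries the module removal in a loop):

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics   # initiator-side transport modules
    kill 1281416                           # nvmfpid recorded by nvmfappstart
    ip -4 addr flush cvl_0_1               # drop the initiator-side test address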
00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1281416 ']' 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1281416 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1281416 ']' 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1281416 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281416 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281416' 00:41:02.374 killing process with pid 1281416 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1281416 00:41:02.374 16:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1281416 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:02.634 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.542 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:04.542 00:41:04.542 real 0m9.842s 00:41:04.542 user 
0m8.775s 00:41:04.542 sys 0m5.154s 00:41:04.542 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:04.542 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:04.542 ************************************ 00:41:04.542 END TEST nvmf_bdevio 00:41:04.542 ************************************ 00:41:04.542 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:04.542 00:41:04.542 real 4m30.011s 00:41:04.542 user 9m4.152s 00:41:04.542 sys 1m49.517s 00:41:04.542 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:04.542 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:04.542 ************************************ 00:41:04.542 END TEST nvmf_target_core_interrupt_mode 00:41:04.542 ************************************ 00:41:04.802 16:46:53 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:04.802 16:46:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:04.802 16:46:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:04.802 16:46:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.802 ************************************ 00:41:04.802 START TEST nvmf_interrupt 00:41:04.802 ************************************ 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:04.802 * Looking for test storage... 
00:41:04.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:04.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.802 --rc genhtml_branch_coverage=1 00:41:04.802 --rc genhtml_function_coverage=1 00:41:04.802 --rc genhtml_legend=1 00:41:04.802 --rc geninfo_all_blocks=1 00:41:04.802 --rc geninfo_unexecuted_blocks=1 00:41:04.802 00:41:04.802 ' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:04.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.802 --rc genhtml_branch_coverage=1 00:41:04.802 --rc genhtml_function_coverage=1 00:41:04.802 --rc genhtml_legend=1 00:41:04.802 --rc geninfo_all_blocks=1 00:41:04.802 --rc geninfo_unexecuted_blocks=1 00:41:04.802 00:41:04.802 ' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:04.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.802 --rc genhtml_branch_coverage=1 00:41:04.802 --rc genhtml_function_coverage=1 00:41:04.802 --rc genhtml_legend=1 00:41:04.802 --rc geninfo_all_blocks=1 00:41:04.802 --rc geninfo_unexecuted_blocks=1 00:41:04.802 00:41:04.802 ' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:04.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.802 --rc genhtml_branch_coverage=1 00:41:04.802 --rc genhtml_function_coverage=1 00:41:04.802 --rc genhtml_legend=1 00:41:04.802 --rc geninfo_all_blocks=1 00:41:04.802 --rc geninfo_unexecuted_blocks=1 00:41:04.802 00:41:04.802 ' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.802 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:05.063 16:46:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:11.636 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:11.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:11.637 16:46:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:11.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:11.637 Found net devices under 0000:af:00.0: cvl_0_0 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:11.637 Found net devices under 0000:af:00.1: cvl_0_1 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:11.637 16:46:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:11.637 16:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:11.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:11.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:41:11.637 00:41:11.637 --- 10.0.0.2 ping statistics --- 00:41:11.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.637 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:11.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:11.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:41:11.637 00:41:11.637 --- 10.0.0.1 ping statistics --- 00:41:11.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.637 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1285143 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1285143 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1285143 ']' 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:11.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:11.637 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.637 [2024-12-16 16:46:59.308348] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:11.637 [2024-12-16 16:46:59.309239] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:11.637 [2024-12-16 16:46:59.309270] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:11.637 [2024-12-16 16:46:59.386504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:11.637 [2024-12-16 16:46:59.408048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:41:11.637 [2024-12-16 16:46:59.408082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:11.637 [2024-12-16 16:46:59.408089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:11.637 [2024-12-16 16:46:59.408100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:11.637 [2024-12-16 16:46:59.408121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:11.637 [2024-12-16 16:46:59.409201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:11.637 [2024-12-16 16:46:59.409203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.637 [2024-12-16 16:46:59.471015] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:11.637 [2024-12-16 16:46:59.471518] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:11.638 [2024-12-16 16:46:59.471782] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:11.638 5000+0 records in 00:41:11.638 5000+0 records out 00:41:11.638 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179465 s, 571 MB/s 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.638 AIO0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.638 [2024-12-16 16:46:59.593993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.638 16:46:59 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:11.638 [2024-12-16 16:46:59.634298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1285143 0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285143 0 idle 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285143 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0' 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285143 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1285143 1 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285143 1 idle 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285147 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 reactor_1' 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285147 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 reactor_1 00:41:11.638 16:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1285183 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
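Every reactor_is_idle/reactor_is_busy call in this trace funnels through the same probe: one batch sample from top, filtered to the reactor_<idx> thread, with the %CPU column compared against a threshold (idle_threshold=30 in the idle checks above; BUSY_THRESHOLD=30 replaces the 65 default while perf drives load below). A minimal bash sketch of that probe, paraphrased from the interrupt/common.sh steps visible in the xtrace (the function names are illustrative, not the upstream source):

# reactor_usage <pid> <idx>: print %CPU of the reactor_<idx> thread,
# mirroring the top | grep | sed | awk pipeline in the trace.
reactor_usage() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" \
        | sed -e 's/^\s*//g' | awk '{print $9}'
}

# reactor_idle <pid> <idx>: succeed when the sampled rate is at or
# below the 30% idle threshold used in this run.
reactor_idle() {
    local rate
    rate=$(reactor_usage "$1" "$2")
    rate=${rate%.*}        # 6.7 -> 6, mirroring the cpu_rate=6 step
    (( rate <= 30 ))
}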
00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1285143 0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1285143 0 busy 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285143 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.40 reactor_0' 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285143 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.40 reactor_0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1285143 1 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1285143 1 busy 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.638 16:47:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.639 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.639 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:11.639 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285147 root 20 0 128.2g 48384 34560 R 93.3 0.1 0:00.27 reactor_1' 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285147 root 20 0 128.2g 48384 34560 R 93.3 0.1 0:00.27 reactor_1 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.897 16:47:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1285183 00:41:21.871 Initializing NVMe Controllers 00:41:21.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:21.871 Controller IO queue size 256, less than required. 00:41:21.871 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:21.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:21.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:21.871 Initialization complete. Launching workers. 
00:41:21.871 ========================================================
00:41:21.871 Latency(us)
00:41:21.871 Device Information : IOPS MiB/s Average min max
00:41:21.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16989.10 66.36 15076.16 2939.57 30715.38
00:41:21.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16798.10 65.62 15243.60 7960.84 24676.61
00:41:21.871 ========================================================
00:41:21.871 Total : 33787.19 131.98 15159.40 2939.57 30715.38
00:41:21.871
00:41:21.871 [2024-12-16 16:47:10.136743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102d350 is same with the state(6) to be set
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1285143 0
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285143 0 idle
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285143 root 20 0 128.2g 48384 34560 S 6.7 0.1 0:20.21 reactor_0'
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285143 root 20 0 128.2g 48384 34560 S 6.7 0.1 0:20.21 reactor_0
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1285143 1
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285143 1 idle
00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:21.871 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285147 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:10.00 reactor_1' 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285147 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:10.00 reactor_1 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:22.132 16:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:22.391 16:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:22.391 16:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:22.391 16:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:22.391 16:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:22.391 16:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter 
)) 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1285143 0 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285143 0 idle 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:24.929 16:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285143 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.48 reactor_0' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285143 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.48 reactor_0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1285143 1 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285143 1 idle 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285143 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285143 -w 256 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285147 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.11 reactor_1' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285147 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.11 reactor_1 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:24.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:24.929 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:24.929 rmmod nvme_tcp 00:41:25.189 rmmod nvme_fabrics 00:41:25.189 rmmod nvme_keyring 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:25.189 16:47:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1285143 ']' 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1285143 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1285143 ']' 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1285143 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1285143 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1285143' 00:41:25.189 killing process with pid 1285143 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1285143 00:41:25.189 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1285143 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:25.448 16:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:27.351 16:47:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:27.351 00:41:27.351 real 0m22.728s 00:41:27.351 user 0m39.611s 00:41:27.351 sys 0m8.375s 00:41:27.351 16:47:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:27.351 16:47:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.351 ************************************ 00:41:27.351 END TEST nvmf_interrupt 00:41:27.351 ************************************ 00:41:27.610 00:41:27.610 real 35m20.823s 00:41:27.610 user 86m4.432s 00:41:27.610 sys 10m22.451s 00:41:27.610 16:47:15 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:27.610 16:47:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.610 ************************************ 00:41:27.610 END TEST nvmf_tcp 00:41:27.610 ************************************ 00:41:27.610 16:47:16 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:27.610 16:47:16 -- spdk/autotest.sh@286 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:27.610 16:47:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:27.610 16:47:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:27.610 16:47:16 -- common/autotest_common.sh@10 -- # set +x 00:41:27.610 ************************************ 00:41:27.610 START TEST spdkcli_nvmf_tcp 00:41:27.610 ************************************ 00:41:27.610 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:27.610 * Looking for test storage... 00:41:27.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:27.610 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:27.610 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:27.610 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:27.869 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:27.869 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:27.869 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.870 --rc genhtml_branch_coverage=1 00:41:27.870 --rc genhtml_function_coverage=1 00:41:27.870 --rc genhtml_legend=1 00:41:27.870 --rc geninfo_all_blocks=1 00:41:27.870 --rc geninfo_unexecuted_blocks=1 00:41:27.870 00:41:27.870 ' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.870 --rc genhtml_branch_coverage=1 00:41:27.870 --rc genhtml_function_coverage=1 00:41:27.870 --rc genhtml_legend=1 00:41:27.870 --rc geninfo_all_blocks=1 00:41:27.870 --rc geninfo_unexecuted_blocks=1 00:41:27.870 00:41:27.870 ' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.870 --rc genhtml_branch_coverage=1 00:41:27.870 --rc genhtml_function_coverage=1 00:41:27.870 --rc genhtml_legend=1 00:41:27.870 --rc geninfo_all_blocks=1 00:41:27.870 --rc geninfo_unexecuted_blocks=1 00:41:27.870 00:41:27.870 ' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.870 --rc genhtml_branch_coverage=1 00:41:27.870 --rc genhtml_function_coverage=1 00:41:27.870 --rc genhtml_legend=1 00:41:27.870 --rc geninfo_all_blocks=1 00:41:27.870 --rc geninfo_unexecuted_blocks=1 00:41:27.870 00:41:27.870 ' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:27.870 
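Note: the "lt 1.15 2" check traced above comes from scripts/common.sh cmp_versions: both version strings are split on '.', '-' and ':' into arrays and compared field by field, with the shorter array padded with zeros. A simplified standalone equivalent (plain numeric dotted versions only; the real helper also normalizes each field through its decimal function):

version_lt() {                       # succeeds when $1 sorts strictly before $2
    local IFS=.
    local -a v1=($1) v2=($2)
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0} y=${v2[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                         # versions are equal
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the comparison above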
16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:27.870 16:47:16 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:27.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1287913 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1287913 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1287913 ']' 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:27.870 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.870 [2024-12-16 16:47:16.321901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
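Note: the "[: : integer expression expected" message above is nvmf/common.sh line 33 evaluating [ '' -eq 1 ]: the flag it reads expands to an empty string, and -eq requires integers on both sides, so the test fails noisily instead of cleanly. A defensive sketch of the same kind of check (the variable name here is illustrative, not the one common.sh actually reads):

interrupt_flag=${SPDK_TEST_EXAMPLE_FLAG:-0}   # default empty/unset to 0 before a numeric test
if [ "$interrupt_flag" -eq 1 ]; then
    echo "flag enabled"
fi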
00:41:27.870 [2024-12-16 16:47:16.321949] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287913 ] 00:41:27.870 [2024-12-16 16:47:16.396311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:27.870 [2024-12-16 16:47:16.420219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.870 [2024-12-16 16:47:16.420220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.130 16:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:28.130 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:28.130 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:28.130 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:28.130 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:28.130 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:28.130 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:28.130 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:28.130 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:28.130 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:28.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:28.130 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:28.130 ' 00:41:30.664 [2024-12-16 16:47:19.226487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:32.041 [2024-12-16 16:47:20.566907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:34.574 [2024-12-16 16:47:23.050550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:37.105 [2024-12-16 16:47:25.193231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:38.481 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:38.481 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:38.481 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:38.481 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:38.481 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:38.481 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:38.481 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:38.481 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:38.481 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:38.481 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:38.481 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:38.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:38.481 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:38.481 16:47:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:39.049 
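Note: spdkcli_job.py above is only a batch driver; each quoted tuple pairs a spdkcli command with the output substring the job expects. The same tree can be built one command at a time with scripts/spdkcli.py against the running nvmf_tgt, e.g. (a sketch using the objects from this run, executed from an SPDK checkout):

scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
scripts/spdkcli.py ll /nvmf    # dump the tree, exactly what check_match diffs against the .match file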
16:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:39.049 16:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:39.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:39.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:39.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:39.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:39.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:39.049 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:39.049 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:39.049 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:39.049 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:39.049 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:39.049 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:39.049 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:39.049 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:39.049 ' 00:41:45.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:45.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:45.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:45.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:45.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:45.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:45.612 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:45.612 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:45.613 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:45.613 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:45.613 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:45.613 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:45.613 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:45.613 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.613 
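Note: the killprocess helper invoked next (and earlier for the interrupt-mode target) is careful about what it signals: it confirms the pid is still alive with kill -0, reads the process name so it never blindly TERMs a sudo wrapper, then kills and reaps. Condensed as a sketch:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # gone already?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # the real helper escalates here instead of bailing
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reaps only if it is our child
}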
16:47:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1287913 ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1287913' 00:41:45.613 killing process with pid 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1287913 ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1287913 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1287913 ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1287913 00:41:45.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1287913) - No such process 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1287913 is not found' 00:41:45.613 Process with pid 1287913 is not found 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:45.613 00:41:45.613 real 0m17.282s 00:41:45.613 user 0m38.102s 00:41:45.613 sys 0m0.781s 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:45.613 16:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.613 ************************************ 00:41:45.613 END TEST spdkcli_nvmf_tcp 00:41:45.613 ************************************ 00:41:45.613 16:47:33 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:45.613 16:47:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:45.613 16:47:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:45.613 16:47:33 -- common/autotest_common.sh@10 -- # set +x 00:41:45.613 ************************************ 00:41:45.613 START TEST nvmf_identify_passthru 00:41:45.613 ************************************ 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:45.613 * Looking for test 
storage... 00:41:45.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:45.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.613 --rc genhtml_branch_coverage=1 00:41:45.613 --rc genhtml_function_coverage=1 00:41:45.613 --rc genhtml_legend=1 00:41:45.613 --rc geninfo_all_blocks=1 00:41:45.613 --rc geninfo_unexecuted_blocks=1 00:41:45.613 00:41:45.613 ' 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:45.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.613 --rc genhtml_branch_coverage=1 00:41:45.613 --rc genhtml_function_coverage=1 00:41:45.613 --rc genhtml_legend=1 00:41:45.613 --rc geninfo_all_blocks=1 00:41:45.613 --rc geninfo_unexecuted_blocks=1 00:41:45.613 00:41:45.613 ' 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:45.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.613 --rc genhtml_branch_coverage=1 00:41:45.613 --rc genhtml_function_coverage=1 00:41:45.613 --rc genhtml_legend=1 00:41:45.613 --rc geninfo_all_blocks=1 00:41:45.613 --rc geninfo_unexecuted_blocks=1 00:41:45.613 00:41:45.613 ' 00:41:45.613 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:45.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.613 --rc genhtml_branch_coverage=1 00:41:45.613 --rc genhtml_function_coverage=1 00:41:45.613 --rc genhtml_legend=1 00:41:45.613 --rc geninfo_all_blocks=1 00:41:45.613 --rc geninfo_unexecuted_blocks=1 00:41:45.613 00:41:45.613 ' 00:41:45.613 16:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:45.613 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:45.613 16:47:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:45.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:45.614 16:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:45.614 16:47:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:45.614 16:47:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:45.614 16:47:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:45.614 16:47:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:45.614 16:47:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.614 16:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:45.614 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:45.614 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:45.614 16:47:33 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:45.614 16:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:50.888 16:47:39 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:50.888 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:50.889 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:50.889 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:50.889 Found net devices under 0000:af:00.0: cvl_0_0 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:50.889 Found net devices under 0000:af:00.1: cvl_0_1 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:50.889 16:47:39 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:50.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:50.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:41:50.889 00:41:50.889 --- 10.0.0.2 ping statistics --- 00:41:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.889 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:50.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:50.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:41:50.889 00:41:50.889 --- 10.0.0.1 ping statistics --- 00:41:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.889 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:50.889 16:47:39 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:50.889 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:50.889 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:50.889 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:51.149 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:41:51.149 16:47:39 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:41:51.149 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:51.149 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:51.149 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:51.149 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:51.149 16:47:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:55.342 16:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:41:55.342 16:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:55.342 16:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:55.342 16:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1294943 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:59.532 16:47:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1294943 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1294943 ']' 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:59.532 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:59.533 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:59.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:59.533 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:59.533 16:47:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:59.533 [2024-12-16 16:47:47.939904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:59.533 [2024-12-16 16:47:47.939949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:59.533 [2024-12-16 16:47:48.000209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:59.533 [2024-12-16 16:47:48.024004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:59.533 [2024-12-16 16:47:48.024044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:59.533 [2024-12-16 16:47:48.024051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:59.533 [2024-12-16 16:47:48.024056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:59.533 [2024-12-16 16:47:48.024061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:59.533 [2024-12-16 16:47:48.025488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:59.533 [2024-12-16 16:47:48.025602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:59.533 [2024-12-16 16:47:48.025708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:59.533 [2024-12-16 16:47:48.025709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:59.533 16:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:59.533 INFO: Log level set to 20 00:41:59.533 INFO: Requests: 00:41:59.533 { 00:41:59.533 "jsonrpc": "2.0", 00:41:59.533 "method": "nvmf_set_config", 00:41:59.533 "id": 1, 00:41:59.533 "params": { 00:41:59.533 "admin_cmd_passthru": { 00:41:59.533 "identify_ctrlr": true 00:41:59.533 } 00:41:59.533 } 00:41:59.533 } 00:41:59.533 00:41:59.533 INFO: response: 00:41:59.533 { 00:41:59.533 "jsonrpc": "2.0", 00:41:59.533 "id": 1, 00:41:59.533 "result": true 00:41:59.533 } 00:41:59.533 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.533 16:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.533 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:59.533 INFO: Setting log level to 20 00:41:59.533 INFO: Setting log level to 20 00:41:59.533 INFO: Log level set to 20 00:41:59.533 INFO: Log level set to 20 00:41:59.533 INFO: Requests: 00:41:59.533 { 00:41:59.533 "jsonrpc": "2.0", 00:41:59.533 "method": "framework_start_init", 00:41:59.533 "id": 1 00:41:59.533 } 00:41:59.533 00:41:59.533 INFO: Requests: 00:41:59.533 { 00:41:59.533 "jsonrpc": "2.0", 00:41:59.533 "method": "framework_start_init", 00:41:59.533 "id": 1 00:41:59.533 } 00:41:59.533 00:41:59.792 [2024-12-16 16:47:48.157302] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:59.792 INFO: response: 00:41:59.792 { 00:41:59.792 "jsonrpc": "2.0", 00:41:59.792 "id": 1, 00:41:59.792 "result": true 00:41:59.792 } 00:41:59.792 00:41:59.792 INFO: response: 00:41:59.792 { 00:41:59.792 "jsonrpc": "2.0", 00:41:59.792 "id": 1, 00:41:59.792 "result": true 00:41:59.792 } 00:41:59.792 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.792 16:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.792 16:47:48 nvmf_identify_passthru -- 
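# (sketch) rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock;
# the passthru bring-up traced above is equivalent to three explicit RPCs, and the
# INFO: Requests / INFO: response JSON printed by rpc_cmd -v is their wire form:
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # admin_cmd_passthru.identify_ctrlr = true
scripts/rpc.py framework_start_init                        # finish init deferred by --wait-for-rpc
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # flags exactly as issued above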
common/autotest_common.sh@10 -- # set +x 00:41:59.792 INFO: Setting log level to 40 00:41:59.792 INFO: Setting log level to 40 00:41:59.792 INFO: Setting log level to 40 00:41:59.792 [2024-12-16 16:47:48.166553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.792 16:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:59.792 16:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.792 16:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.081 Nvme0n1 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.081 [2024-12-16 16:47:51.062587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.081 [ 00:42:03.081 { 00:42:03.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:03.081 "subtype": "Discovery", 00:42:03.081 "listen_addresses": [], 00:42:03.081 "allow_any_host": true, 00:42:03.081 "hosts": [] 00:42:03.081 }, 00:42:03.081 { 00:42:03.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:03.081 "subtype": "NVMe", 00:42:03.081 "listen_addresses": [ 00:42:03.081 { 00:42:03.081 "trtype": "TCP", 00:42:03.081 "adrfam": "IPv4", 00:42:03.081 "traddr": "10.0.0.2", 00:42:03.081 "trsvcid": "4420" 00:42:03.081 } 00:42:03.081 ], 00:42:03.081 "allow_any_host": true, 00:42:03.081 "hosts": [], 00:42:03.081 "serial_number": 
"SPDK00000000000001", 00:42:03.081 "model_number": "SPDK bdev Controller", 00:42:03.081 "max_namespaces": 1, 00:42:03.081 "min_cntlid": 1, 00:42:03.081 "max_cntlid": 65519, 00:42:03.081 "namespaces": [ 00:42:03.081 { 00:42:03.081 "nsid": 1, 00:42:03.081 "bdev_name": "Nvme0n1", 00:42:03.081 "name": "Nvme0n1", 00:42:03.081 "nguid": "727784877073486FA78AA62439EF1305", 00:42:03.081 "uuid": "72778487-7073-486f-a78a-a62439ef1305" 00:42:03.081 } 00:42:03.081 ] 00:42:03.081 } 00:42:03.081 ] 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:03.081 16:47:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:03.081 rmmod nvme_tcp 00:42:03.081 rmmod nvme_fabrics 00:42:03.081 rmmod nvme_keyring 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1294943 ']' 00:42:03.081 16:47:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1294943 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1294943 ']' 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1294943 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1294943 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:03.081 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:03.082 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1294943' 00:42:03.082 killing process with pid 1294943 00:42:03.082 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1294943 00:42:03.082 16:47:51 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1294943 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:04.460 16:47:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.460 16:47:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:04.460 16:47:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.365 16:47:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:06.365 00:42:06.365 real 0m21.566s 00:42:06.365 user 0m26.901s 00:42:06.365 sys 0m5.322s 00:42:06.365 16:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.365 16:47:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:06.365 ************************************ 00:42:06.365 END TEST nvmf_identify_passthru 00:42:06.365 ************************************ 00:42:06.628 16:47:55 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:06.628 16:47:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:06.628 16:47:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.628 16:47:55 -- common/autotest_common.sh@10 -- # set +x 00:42:06.628 ************************************ 00:42:06.628 START TEST nvmf_dif 00:42:06.628 ************************************ 00:42:06.628 16:47:55 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:06.628 * Looking for test 
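# (sketch) What nvmf_identify_passthru just verified, condensed: with
# --passthru-identify-ctrlr enabled, the Identify data served by cnode1 over NVMe/TCP
# must match the PCIe-attached drive. Binary paths abbreviated; the full
# spdk_nvme_identify invocations appear in the trace above:
pcie_sn=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' | awk '/Serial Number:/{print $3}')
tcp_sn=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/{print $3}')
[ "$pcie_sn" = "$tcp_sn" ] && echo "passthru OK: $pcie_sn"   # this run asserted BTLJ7244049A1P0FGN on both paths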
storage... 00:42:06.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.629 16:47:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:06.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.629 --rc genhtml_branch_coverage=1 00:42:06.629 --rc genhtml_function_coverage=1 00:42:06.629 --rc genhtml_legend=1 00:42:06.629 --rc geninfo_all_blocks=1 00:42:06.629 --rc geninfo_unexecuted_blocks=1 00:42:06.629 00:42:06.629 ' 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:06.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.629 --rc genhtml_branch_coverage=1 00:42:06.629 --rc genhtml_function_coverage=1 00:42:06.629 --rc genhtml_legend=1 00:42:06.629 --rc geninfo_all_blocks=1 00:42:06.629 --rc geninfo_unexecuted_blocks=1 00:42:06.629 00:42:06.629 ' 00:42:06.629 16:47:55 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:06.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.629 --rc genhtml_branch_coverage=1 00:42:06.629 --rc genhtml_function_coverage=1 00:42:06.629 --rc genhtml_legend=1 00:42:06.629 --rc geninfo_all_blocks=1 00:42:06.629 --rc geninfo_unexecuted_blocks=1 00:42:06.629 00:42:06.629 ' 00:42:06.629 16:47:55 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:06.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.629 --rc genhtml_branch_coverage=1 00:42:06.629 --rc genhtml_function_coverage=1 00:42:06.629 --rc genhtml_legend=1 00:42:06.630 --rc geninfo_all_blocks=1 00:42:06.630 --rc geninfo_unexecuted_blocks=1 00:42:06.630 00:42:06.630 ' 00:42:06.630 16:47:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.630 16:47:55 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:06.630 16:47:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.630 16:47:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.630 16:47:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.630 16:47:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.630 16:47:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.893 16:47:55 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.893 16:47:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.893 16:47:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:06.893 16:47:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:06.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:06.893 16:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:06.893 16:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:06.893 16:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:06.893 16:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:06.893 16:47:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.893 16:47:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:06.893 16:47:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:06.893 16:47:55 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:42:06.893 16:47:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:12.333 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:12.333 16:48:00 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:12.334 
16:48:00 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:12.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:12.334 Found net devices under 0000:af:00.0: cvl_0_0 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:12.334 Found net devices under 0000:af:00.1: cvl_0_1 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@259 -- # 
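# (sketch) The discovery loop above keys off sysfs: every network PCI function exposes
# its kernel netdev name under /sys/bus/pci/devices/<bdf>/net/. Reproducing the mapping
# reported for the two e810 ports on this rig:
for bdf in 0000:af:00.0 0000:af:00.1; do
    echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net/)"   # prints cvl_0_0 / cvl_0_1 here
done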
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:12.334 16:48:00 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:12.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:12.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:42:12.594 00:42:12.594 --- 10.0.0.2 ping statistics --- 00:42:12.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:12.594 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:12.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:12.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:42:12.594 00:42:12.594 --- 10.0.0.1 ping statistics --- 00:42:12.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:12.594 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:12.594 16:48:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:15.131 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:15.131 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:15.131 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:15.391 16:48:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:15.391 16:48:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1300465 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1300465 00:42:15.391 16:48:03 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1300465 ']' 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:15.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:15.391 16:48:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:15.391 [2024-12-16 16:48:03.957495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:15.391 [2024-12-16 16:48:03.957546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:15.649 [2024-12-16 16:48:04.037210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.649 [2024-12-16 16:48:04.059624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:15.649 [2024-12-16 16:48:04.059658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:15.649 [2024-12-16 16:48:04.059665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:15.650 [2024-12-16 16:48:04.059672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:15.650 [2024-12-16 16:48:04.059677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:15.650 [2024-12-16 16:48:04.060190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:15.650 16:48:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:15.650 16:48:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:15.650 16:48:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:15.650 16:48:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:15.650 [2024-12-16 16:48:04.191416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.650 16:48:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:15.650 16:48:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:15.650 ************************************ 00:42:15.650 START TEST fio_dif_1_default 00:42:15.650 ************************************ 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.650 bdev_null0 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:15.650 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:15.908 [2024-12-16 16:48:04.267737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:15.908 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:15.909 { 00:42:15.909 "params": { 00:42:15.909 "name": "Nvme$subsystem", 00:42:15.909 "trtype": "$TEST_TRANSPORT", 00:42:15.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.909 "adrfam": "ipv4", 00:42:15.909 "trsvcid": "$NVMF_PORT", 00:42:15.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.909 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:42:15.909 "hdgst": ${hdgst:-false}, 00:42:15.909 "ddgst": ${ddgst:-false} 00:42:15.909 }, 00:42:15.909 "method": "bdev_nvme_attach_controller" 00:42:15.909 } 00:42:15.909 EOF 00:42:15.909 )") 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:15.909 "params": { 00:42:15.909 "name": "Nvme0", 00:42:15.909 "trtype": "tcp", 00:42:15.909 "traddr": "10.0.0.2", 00:42:15.909 "adrfam": "ipv4", 00:42:15.909 "trsvcid": "4420", 00:42:15.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:15.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:15.909 "hdgst": false, 00:42:15.909 "ddgst": false 00:42:15.909 }, 00:42:15.909 "method": "bdev_nvme_attach_controller" 00:42:15.909 }' 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:15.909 16:48:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.167 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:16.167 fio-3.35 00:42:16.167 Starting 1 thread 00:42:28.375 00:42:28.375 filename0: (groupid=0, jobs=1): err= 0: pid=1300760: Mon Dec 16 16:48:15 2024 00:42:28.375 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10019msec) 00:42:28.375 slat (nsec): min=6078, max=24432, avg=6369.28, stdev=765.96 00:42:28.375 clat (usec): min=40839, max=46348, avg=41380.48, stdev=574.09 00:42:28.375 lat (usec): min=40845, max=46373, avg=41386.85, stdev=574.32 00:42:28.375 clat percentiles (usec): 00:42:28.375 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:28.375 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:28.375 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:28.375 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:42:28.375 | 99.99th=[46400] 00:42:28.376 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=385.60, stdev= 7.16, samples=20 00:42:28.376 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:42:28.376 lat (msec) : 50=100.00% 00:42:28.376 cpu : usr=92.14%, sys=7.60%, ctx=14, majf=0, minf=0 00:42:28.376 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:28.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.376 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:28.376 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:28.376 00:42:28.376 Run status group 0 (all jobs): 
00:42:28.376 READ: bw=386KiB/s (396kB/s), 386KiB/s-386KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10019-10019msec 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 00:42:28.376 real 0m11.207s 00:42:28.376 user 0m16.230s 00:42:28.376 sys 0m1.115s 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 ************************************ 00:42:28.376 END TEST fio_dif_1_default 00:42:28.376 ************************************ 00:42:28.376 16:48:15 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:28.376 16:48:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:28.376 16:48:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 ************************************ 00:42:28.376 START TEST fio_dif_1_multi_subsystems 00:42:28.376 ************************************ 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 bdev_null0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
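# (sketch) Data path exercised by fio_dif_1_default, condensed from the trace above;
# rpc.py and fio paths abbreviated, the fio job file (fed via /dev/fd) elided:
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # null bdev, 16B metadata, DIF type 1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
LD_PRELOAD=build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_nvmf_target_json 0)   # + job file
# The transport was created with --dif-insert-or-strip, so the target inserts/strips the
# 16-byte DIF metadata in the I/O path; ~96 IOPS of 4 KiB randread at iodepth=4 against
# bdev_null0 matches the 386KiB/s summary printed above.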
[[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 [2024-12-16 16:48:15.548619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 bdev_null1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:28.376 { 00:42:28.376 "params": { 00:42:28.376 "name": "Nvme$subsystem", 00:42:28.376 "trtype": "$TEST_TRANSPORT", 00:42:28.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:28.376 "adrfam": "ipv4", 00:42:28.376 "trsvcid": "$NVMF_PORT", 00:42:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:28.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:28.376 "hdgst": ${hdgst:-false}, 00:42:28.376 "ddgst": ${ddgst:-false} 00:42:28.376 }, 00:42:28.376 "method": "bdev_nvme_attach_controller" 00:42:28.376 } 00:42:28.376 EOF 00:42:28.376 )") 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:28.376 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:28.376 { 00:42:28.376 "params": { 00:42:28.376 "name": "Nvme$subsystem", 00:42:28.376 "trtype": "$TEST_TRANSPORT", 00:42:28.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:28.376 "adrfam": "ipv4", 00:42:28.376 "trsvcid": "$NVMF_PORT", 00:42:28.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:28.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:28.377 "hdgst": ${hdgst:-false}, 00:42:28.377 "ddgst": ${ddgst:-false} 00:42:28.377 }, 00:42:28.377 "method": "bdev_nvme_attach_controller" 00:42:28.377 } 00:42:28.377 EOF 00:42:28.377 )") 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:28.377 "params": { 00:42:28.377 "name": "Nvme0", 00:42:28.377 "trtype": "tcp", 00:42:28.377 "traddr": "10.0.0.2", 00:42:28.377 "adrfam": "ipv4", 00:42:28.377 "trsvcid": "4420", 00:42:28.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:28.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:28.377 "hdgst": false, 00:42:28.377 "ddgst": false 00:42:28.377 }, 00:42:28.377 "method": "bdev_nvme_attach_controller" 00:42:28.377 },{ 00:42:28.377 "params": { 00:42:28.377 "name": "Nvme1", 00:42:28.377 "trtype": "tcp", 00:42:28.377 "traddr": "10.0.0.2", 00:42:28.377 "adrfam": "ipv4", 00:42:28.377 "trsvcid": "4420", 00:42:28.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:28.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:28.377 "hdgst": false, 00:42:28.377 "ddgst": false 00:42:28.377 }, 00:42:28.377 "method": "bdev_nvme_attach_controller" 00:42:28.377 }' 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:28.377 16:48:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:28.377 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:28.377 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:28.377 fio-3.35 00:42:28.377 Starting 2 threads 00:42:38.355 00:42:38.355 filename0: (groupid=0, jobs=1): err= 0: pid=1303063: Mon Dec 16 16:48:26 2024 00:42:38.355 read: IOPS=192, BW=770KiB/s (789kB/s)(7728KiB/10033msec) 00:42:38.355 slat (nsec): min=6012, max=76656, avg=8327.87, stdev=4149.45 00:42:38.355 clat (usec): min=388, max=42593, avg=20746.12, stdev=20541.61 00:42:38.355 lat (usec): min=394, max=42602, avg=20754.44, stdev=20540.47 00:42:38.355 clat percentiles (usec): 00:42:38.355 | 1.00th=[ 404], 5.00th=[ 429], 10.00th=[ 474], 20.00th=[ 490], 00:42:38.355 | 30.00th=[ 498], 40.00th=[ 515], 50.00th=[ 693], 60.00th=[41157], 00:42:38.355 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:42:38.355 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:38.355 | 99.99th=[42730] 00:42:38.355 bw ( KiB/s): min= 704, max= 896, per=66.50%, avg=771.20, stdev=38.71, samples=20 00:42:38.355 iops : min= 176, max= 224, avg=192.80, stdev= 9.68, samples=20 00:42:38.355 lat (usec) : 500=32.71%, 750=17.70%, 1000=0.10% 00:42:38.355 lat (msec) : 2=0.21%, 50=49.28% 00:42:38.355 cpu : usr=98.03%, sys=1.69%, ctx=17, majf=0, minf=72 00:42:38.355 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:38.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.355 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.356 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:38.356 filename1: (groupid=0, jobs=1): err= 0: pid=1303064: Mon Dec 16 16:48:26 2024 00:42:38.356 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10002msec) 00:42:38.356 slat (nsec): min=6014, max=41685, avg=11047.70, stdev=8237.67 00:42:38.356 clat (usec): min=411, max=42522, avg=40953.22, stdev=4550.11 00:42:38.356 lat (usec): min=417, max=42530, avg=40964.26, stdev=4550.35 00:42:38.356 clat percentiles (usec): 00:42:38.356 | 1.00th=[ 445], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:38.356 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:42:38.356 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:38.356 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:42:38.356 | 99.99th=[42730] 00:42:38.356 bw ( KiB/s): min= 384, max= 416, per=33.64%, avg=390.74, stdev=13.40, samples=19 00:42:38.356 iops : min= 96, max= 104, avg=97.68, stdev= 3.35, samples=19 00:42:38.356 lat (usec) : 500=1.23% 00:42:38.356 lat (msec) : 50=98.77% 00:42:38.356 cpu : usr=98.13%, sys=1.60%, ctx=13, majf=0, minf=117 00:42:38.356 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:38.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.356 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.356 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.356 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:38.356 00:42:38.356 Run status group 0 (all jobs): 00:42:38.356 READ: bw=1159KiB/s (1187kB/s), 390KiB/s-770KiB/s (400kB/s-789kB/s), io=11.4MiB (11.9MB), run=10002-10033msec 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 00:42:38.356 real 0m11.312s 00:42:38.356 user 0m26.049s 00:42:38.356 sys 0m0.645s 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 ************************************ 00:42:38.356 END TEST fio_dif_1_multi_subsystems 00:42:38.356 ************************************ 00:42:38.356 16:48:26 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:38.356 
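Before the fio_dif_rand_params output begins, note the RPC cycle that both the teardown just logged and the setup that follows obey: each test wraps one or more null bdevs in NVMe-oF TCP subsystems, runs fio against them, then deletes each subsystem before its backing bdev. rpc_cmd in the trace is autotest's wrapper around SPDK's scripts/rpc.py, so the cycle can be reproduced standalone roughly as below (names, sizes, and ordering are taken verbatim from the trace; only the standalone rpc.py spelling is an assumption):

  # create: null bdev (64 MiB, 512-byte blocks) with 16-byte metadata and DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it through an NVMe-oF subsystem listening on TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # ... run the fio workload ...
  # destroy: subsystem first, then the backing bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0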
16:48:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:38.356 16:48:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 ************************************ 00:42:38.356 START TEST fio_dif_rand_params 00:42:38.356 ************************************ 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 bdev_null0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.356 [2024-12-16 16:48:26.932989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:38.356 { 00:42:38.356 "params": { 00:42:38.356 "name": "Nvme$subsystem", 00:42:38.356 "trtype": "$TEST_TRANSPORT", 00:42:38.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:38.356 "adrfam": "ipv4", 00:42:38.356 "trsvcid": "$NVMF_PORT", 00:42:38.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:38.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:38.356 "hdgst": ${hdgst:-false}, 00:42:38.356 "ddgst": ${ddgst:-false} 00:42:38.356 }, 00:42:38.356 "method": "bdev_nvme_attach_controller" 00:42:38.356 } 00:42:38.356 EOF 00:42:38.356 )") 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:38.356 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
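The config fragments assembled above are per-controller bdev_nvme_attach_controller parameter objects; IFS=, joins them and jq validates the result before fio receives it on /dev/fd/62. The fio spdk_bdev engine consumes SPDK's standard subsystem-config JSON, so the fully wrapped single-controller document should look roughly like the sketch below (the outer "subsystems"/"bdev" wrapper is an assumption about gen_nvmf_target_json's final step; the params object matches the printf output that follows in the trace):

  cat <<'JSON' | jq .
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  JSON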
00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:38.357 16:48:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:38.357 "params": { 00:42:38.357 "name": "Nvme0", 00:42:38.357 "trtype": "tcp", 00:42:38.357 "traddr": "10.0.0.2", 00:42:38.357 "adrfam": "ipv4", 00:42:38.357 "trsvcid": "4420", 00:42:38.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:38.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:38.357 "hdgst": false, 00:42:38.357 "ddgst": false 00:42:38.357 }, 00:42:38.357 "method": "bdev_nvme_attach_controller" 00:42:38.357 }' 00:42:38.634 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:38.634 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:38.634 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:38.634 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:38.634 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:38.634 16:48:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:38.634 16:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:38.634 16:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:38.634 16:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:38.634 16:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:38.894 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:38.894 ... 
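The banner above gives the workload side of the pairing: gen_fio_conf writes the job file that fio reads on /dev/fd/61, using the parameters dif.sh set for this case (bs=128k, numjobs=3, iodepth=3, runtime=5). A plausible reconstruction of that job file is sketched below; gen_fio_conf's exact option set is not shown in the trace, so treat the spelling as illustrative (thread=1 is required by SPDK's fio plugins, and Nvme0n1 is the namespace bdev produced by attaching controller "Nvme0"):

  [global]
  thread=1
  ioengine=spdk_bdev
  direct=1
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1

  [filename0]
  rw=randread
  filename=Nvme0n1

With numjobs=3, fio lists the job once with an ellipsis for the clones, which matches the "Starting 3 threads" line that follows.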
00:42:38.894 fio-3.35 00:42:38.894 Starting 3 threads 00:42:44.244 00:42:44.244 filename0: (groupid=0, jobs=1): err= 0: pid=1304977: Mon Dec 16 16:48:32 2024 00:42:44.244 read: IOPS=337, BW=42.2MiB/s (44.2MB/s)(213MiB/5046msec) 00:42:44.244 slat (nsec): min=6410, max=53628, avg=17488.85, stdev=8335.18 00:42:44.244 clat (usec): min=5011, max=52421, avg=8843.88, stdev=4405.96 00:42:44.244 lat (usec): min=5025, max=52445, avg=8861.37, stdev=4405.99 00:42:44.244 clat percentiles (usec): 00:42:44.244 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7570], 00:42:44.244 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:42:44.244 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:42:44.244 | 99.00th=[45351], 99.50th=[47973], 99.90th=[50594], 99.95th=[52167], 00:42:44.244 | 99.99th=[52167] 00:42:44.244 bw ( KiB/s): min=37888, max=46848, per=36.32%, avg=43545.60, stdev=2755.75, samples=10 00:42:44.244 iops : min= 296, max= 366, avg=340.20, stdev=21.53, samples=10 00:42:44.244 lat (msec) : 10=93.31%, 20=5.52%, 50=1.06%, 100=0.12% 00:42:44.244 cpu : usr=96.17%, sys=3.51%, ctx=18, majf=0, minf=62 00:42:44.244 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.244 issued rwts: total=1703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.244 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:44.244 filename0: (groupid=0, jobs=1): err= 0: pid=1304978: Mon Dec 16 16:48:32 2024 00:42:44.244 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(190MiB/5044msec) 00:42:44.244 slat (nsec): min=6381, max=53542, avg=16545.49, stdev=7107.27 00:42:44.244 clat (usec): min=3452, max=50510, avg=9921.48, stdev=3694.53 00:42:44.244 lat (usec): min=3462, max=50522, avg=9938.03, stdev=3695.07 00:42:44.244 clat percentiles (usec): 00:42:44.244 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 8356], 00:42:44.244 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:42:44.244 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11731], 95.00th=[12387], 00:42:44.244 | 99.00th=[14091], 99.50th=[47449], 99.90th=[49546], 99.95th=[50594], 00:42:44.244 | 99.99th=[50594] 00:42:44.244 bw ( KiB/s): min=33792, max=41984, per=32.37%, avg=38809.60, stdev=2789.24, samples=10 00:42:44.244 iops : min= 264, max= 328, avg=303.20, stdev=21.79, samples=10 00:42:44.244 lat (msec) : 4=0.53%, 10=51.71%, 20=47.04%, 50=0.66%, 100=0.07% 00:42:44.244 cpu : usr=96.19%, sys=3.49%, ctx=7, majf=0, minf=81 00:42:44.244 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.244 issued rwts: total=1518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.244 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:44.244 filename0: (groupid=0, jobs=1): err= 0: pid=1304979: Mon Dec 16 16:48:32 2024 00:42:44.244 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(188MiB/5003msec) 00:42:44.244 slat (nsec): min=6321, max=38700, avg=17420.72, stdev=6553.72 00:42:44.244 clat (usec): min=4850, max=52587, avg=9952.06, stdev=5493.04 00:42:44.244 lat (usec): min=4863, max=52598, avg=9969.48, stdev=5493.01 00:42:44.244 clat percentiles (usec): 00:42:44.244 | 1.00th=[ 5735], 5.00th=[ 6849], 10.00th=[ 7701], 
20.00th=[ 8225], 00:42:44.244 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:42:44.244 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:42:44.244 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[52691], 00:42:44.244 | 99.99th=[52691] 00:42:44.244 bw ( KiB/s): min=34048, max=41984, per=32.10%, avg=38485.33, stdev=3042.53, samples=9 00:42:44.244 iops : min= 266, max= 328, avg=300.67, stdev=23.77, samples=9 00:42:44.244 lat (msec) : 10=70.37%, 20=27.84%, 50=0.93%, 100=0.86% 00:42:44.244 cpu : usr=96.98%, sys=2.72%, ctx=7, majf=0, minf=44 00:42:44.244 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.244 issued rwts: total=1505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.244 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:44.244 00:42:44.244 Run status group 0 (all jobs): 00:42:44.244 READ: bw=117MiB/s (123MB/s), 37.6MiB/s-42.2MiB/s (39.4MB/s-44.2MB/s), io=591MiB (619MB), run=5003-5046msec 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 bdev_null0 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 [2024-12-16 16:48:33.048410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 bdev_null1 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.504 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.505 bdev_null2 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.505 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.764 16:48:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.764 { 00:42:44.764 "params": { 00:42:44.764 "name": "Nvme$subsystem", 00:42:44.764 "trtype": "$TEST_TRANSPORT", 00:42:44.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.764 "adrfam": "ipv4", 00:42:44.764 "trsvcid": "$NVMF_PORT", 00:42:44.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.764 "hdgst": ${hdgst:-false}, 00:42:44.764 "ddgst": ${ddgst:-false} 00:42:44.764 }, 00:42:44.764 "method": "bdev_nvme_attach_controller" 00:42:44.764 } 00:42:44.764 EOF 00:42:44.764 )") 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.764 { 00:42:44.764 "params": { 00:42:44.764 "name": "Nvme$subsystem", 00:42:44.764 "trtype": "$TEST_TRANSPORT", 00:42:44.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.764 "adrfam": "ipv4", 00:42:44.764 "trsvcid": "$NVMF_PORT", 00:42:44.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.764 "hdgst": ${hdgst:-false}, 00:42:44.764 "ddgst": ${ddgst:-false} 00:42:44.764 }, 00:42:44.764 "method": "bdev_nvme_attach_controller" 00:42:44.764 } 00:42:44.764 EOF 00:42:44.764 )") 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.764 16:48:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.764 { 00:42:44.764 "params": { 00:42:44.764 "name": "Nvme$subsystem", 00:42:44.764 "trtype": "$TEST_TRANSPORT", 00:42:44.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.764 "adrfam": "ipv4", 00:42:44.764 "trsvcid": "$NVMF_PORT", 00:42:44.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.764 "hdgst": ${hdgst:-false}, 00:42:44.764 "ddgst": ${ddgst:-false} 00:42:44.764 }, 00:42:44.764 "method": "bdev_nvme_attach_controller" 00:42:44.764 } 00:42:44.764 EOF 00:42:44.764 )") 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:44.764 16:48:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:44.764 "params": { 00:42:44.764 "name": "Nvme0", 00:42:44.764 "trtype": "tcp", 00:42:44.764 "traddr": "10.0.0.2", 00:42:44.764 "adrfam": "ipv4", 00:42:44.764 "trsvcid": "4420", 00:42:44.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.765 "hdgst": false, 00:42:44.765 "ddgst": false 00:42:44.765 }, 00:42:44.765 "method": "bdev_nvme_attach_controller" 00:42:44.765 },{ 00:42:44.765 "params": { 00:42:44.765 "name": "Nvme1", 00:42:44.765 "trtype": "tcp", 00:42:44.765 "traddr": "10.0.0.2", 00:42:44.765 "adrfam": "ipv4", 00:42:44.765 "trsvcid": "4420", 00:42:44.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:44.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:44.765 "hdgst": false, 00:42:44.765 "ddgst": false 00:42:44.765 }, 00:42:44.765 "method": "bdev_nvme_attach_controller" 00:42:44.765 },{ 00:42:44.765 "params": { 00:42:44.765 "name": "Nvme2", 00:42:44.765 "trtype": "tcp", 00:42:44.765 "traddr": "10.0.0.2", 00:42:44.765 "adrfam": "ipv4", 00:42:44.765 "trsvcid": "4420", 00:42:44.765 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:44.765 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:44.765 "hdgst": false, 00:42:44.765 "ddgst": false 00:42:44.765 }, 00:42:44.765 "method": "bdev_nvme_attach_controller" 00:42:44.765 }' 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.765 
16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:44.765 16:48:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.024 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:45.024 ... 00:42:45.024 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:45.024 ... 00:42:45.024 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:45.024 ... 00:42:45.024 fio-3.35 00:42:45.024 Starting 24 threads 00:42:57.232 00:42:57.232 filename0: (groupid=0, jobs=1): err= 0: pid=1306115: Mon Dec 16 16:48:44 2024 00:42:57.232 read: IOPS=582, BW=2330KiB/s (2386kB/s)(22.8MiB/10024msec) 00:42:57.232 slat (usec): min=7, max=101, avg=41.30, stdev=20.09 00:42:57.232 clat (usec): min=9984, max=36954, avg=27159.03, stdev=2369.54 00:42:57.232 lat (usec): min=10014, max=37023, avg=27200.33, stdev=2368.27 00:42:57.232 clat percentiles (usec): 00:42:57.232 | 1.00th=[18744], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.232 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.232 | 70.00th=[27919], 80.00th=[29230], 90.00th=[30540], 95.00th=[30802], 00:42:57.232 | 99.00th=[31589], 99.50th=[32113], 99.90th=[33162], 99.95th=[35914], 00:42:57.232 | 99.99th=[36963] 00:42:57.232 bw ( KiB/s): min= 2048, max= 2688, per=4.17%, avg=2328.80, stdev=163.47, samples=20 00:42:57.232 iops : min= 512, max= 672, avg=582.10, stdev=40.82, samples=20 00:42:57.232 lat (msec) : 10=0.02%, 20=1.08%, 50=98.90% 00:42:57.232 cpu : usr=98.64%, sys=0.92%, ctx=58, majf=0, minf=9 00:42:57.232 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:42:57.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.232 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.232 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.232 filename0: (groupid=0, jobs=1): err= 0: pid=1306117: Mon Dec 16 16:48:44 2024 00:42:57.232 read: IOPS=595, BW=2382KiB/s (2440kB/s)(23.3MiB/10020msec) 00:42:57.232 slat (usec): min=6, max=126, avg=28.34, stdev=21.71 00:42:57.232 clat (usec): min=2164, max=32785, avg=26633.91, stdev=4345.23 00:42:57.232 lat (usec): min=2174, max=32807, avg=26662.25, stdev=4344.41 00:42:57.232 clat percentiles (usec): 00:42:57.232 | 1.00th=[ 2868], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.232 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.232 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30540], 95.00th=[30802], 00:42:57.232 | 99.00th=[31851], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:42:57.232 | 99.99th=[32900] 00:42:57.232 bw ( KiB/s): min= 2048, max= 3712, per=4.26%, avg=2379.75, stdev=348.37, samples=20 00:42:57.232 iops : min= 512, max= 928, avg=594.80, stdev=87.09, samples=20 00:42:57.232 lat (msec) : 4=2.03%, 10=0.39%, 20=1.07%, 50=96.51% 00:42:57.232 cpu : usr=98.96%, sys=0.63%, ctx=36, majf=0, minf=9 00:42:57.232 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:42:57.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.232 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.232 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.232 filename0: (groupid=0, jobs=1): err= 0: pid=1306118: Mon Dec 16 16:48:44 2024 00:42:57.232 read: IOPS=582, BW=2332KiB/s (2388kB/s)(22.8MiB/10018msec) 00:42:57.232 slat (nsec): min=7804, max=61699, avg=23718.67, stdev=11552.13 00:42:57.232 clat (usec): min=9596, max=32326, avg=27256.18, stdev=2382.73 00:42:57.232 lat (usec): min=9612, max=32343, avg=27279.90, stdev=2381.90 00:42:57.232 clat percentiles (usec): 00:42:57.232 | 1.00th=[16712], 5.00th=[25035], 10.00th=[25035], 20.00th=[25822], 00:42:57.232 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.232 | 70.00th=[27919], 80.00th=[29492], 90.00th=[30540], 95.00th=[30802], 00:42:57.232 | 99.00th=[31851], 99.50th=[31851], 99.90th=[32113], 99.95th=[32375], 00:42:57.232 | 99.99th=[32375] 00:42:57.232 bw ( KiB/s): min= 2048, max= 2688, per=4.17%, avg=2328.80, stdev=152.55, samples=20 00:42:57.232 iops : min= 512, max= 672, avg=582.10, stdev=38.08, samples=20 00:42:57.232 lat (msec) : 10=0.03%, 20=1.06%, 50=98.90% 00:42:57.233 cpu : usr=98.65%, sys=0.99%, ctx=12, majf=0, minf=9 00:42:57.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename0: (groupid=0, jobs=1): err= 0: pid=1306119: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=596, BW=2385KiB/s (2443kB/s)(23.3MiB/10019msec) 00:42:57.233 slat (nsec): min=7483, max=69238, avg=18888.10, stdev=10877.70 00:42:57.233 clat (usec): min=1060, max=33825, avg=26683.95, stdev=4512.12 00:42:57.233 lat (usec): min=1078, max=33844, avg=26702.83, stdev=4511.80 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[ 2704], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:42:57.233 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.233 | 70.00th=[27919], 80.00th=[29492], 90.00th=[30540], 95.00th=[31065], 00:42:57.233 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:42:57.233 | 99.99th=[33817] 00:42:57.233 bw ( KiB/s): min= 2048, max= 3775, per=4.27%, avg=2382.90, stdev=353.96, samples=20 00:42:57.233 iops : min= 512, max= 943, avg=595.55, stdev=88.35, samples=20 00:42:57.233 lat (msec) : 2=0.12%, 4=1.96%, 10=0.49%, 20=1.07%, 50=96.37% 00:42:57.233 cpu : usr=97.54%, sys=1.62%, ctx=160, majf=0, minf=9 00:42:57.233 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename0: (groupid=0, jobs=1): err= 0: pid=1306120: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=580, BW=2322KiB/s (2378kB/s)(22.7MiB/10005msec) 00:42:57.233 slat (nsec): min=6650, max=67484, 
avg=26051.13, stdev=12517.10 00:42:57.233 clat (usec): min=9440, max=43761, avg=27367.22, stdev=2171.41 00:42:57.233 lat (usec): min=9453, max=43809, avg=27393.27, stdev=2170.76 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[24511], 5.00th=[25035], 10.00th=[25035], 20.00th=[25560], 00:42:57.233 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.233 | 70.00th=[27919], 80.00th=[29230], 90.00th=[30540], 95.00th=[31065], 00:42:57.233 | 99.00th=[31851], 99.50th=[32375], 99.90th=[43254], 99.95th=[43779], 00:42:57.233 | 99.99th=[43779] 00:42:57.233 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2322.37, stdev=129.18, samples=19 00:42:57.233 iops : min= 542, max= 640, avg=580.32, stdev=32.31, samples=19 00:42:57.233 lat (msec) : 10=0.03%, 20=0.41%, 50=99.55% 00:42:57.233 cpu : usr=98.29%, sys=1.18%, ctx=46, majf=0, minf=9 00:42:57.233 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename0: (groupid=0, jobs=1): err= 0: pid=1306121: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=579, BW=2319KiB/s (2375kB/s)(22.7MiB/10016msec) 00:42:57.233 slat (usec): min=5, max=138, avg=44.52, stdev=27.47 00:42:57.233 clat (usec): min=16161, max=34204, avg=27162.41, stdev=1985.64 00:42:57.233 lat (usec): min=16177, max=34220, avg=27206.93, stdev=1988.27 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.233 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.233 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:42:57.233 | 99.00th=[31589], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341], 00:42:57.233 | 99.99th=[34341] 00:42:57.233 bw ( KiB/s): min= 2048, max= 2560, per=4.16%, avg=2322.63, stdev=142.95, samples=19 00:42:57.233 iops : min= 512, max= 640, avg=580.42, stdev=35.71, samples=19 00:42:57.233 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.233 cpu : usr=99.02%, sys=0.60%, ctx=13, majf=0, minf=9 00:42:57.233 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename0: (groupid=0, jobs=1): err= 0: pid=1306122: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=582, BW=2330KiB/s (2386kB/s)(22.8MiB/10024msec) 00:42:57.233 slat (usec): min=7, max=132, avg=44.62, stdev=25.49 00:42:57.233 clat (usec): min=9176, max=32619, avg=27111.28, stdev=2299.31 00:42:57.233 lat (usec): min=9188, max=32644, avg=27155.90, stdev=2298.74 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[18482], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.233 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.233 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30540], 95.00th=[30802], 00:42:57.233 | 99.00th=[31327], 99.50th=[32113], 99.90th=[32375], 99.95th=[32637], 00:42:57.233 | 99.99th=[32637] 00:42:57.233 bw ( KiB/s): min= 2048, 
max= 2688, per=4.17%, avg=2328.80, stdev=163.47, samples=20 00:42:57.233 iops : min= 512, max= 672, avg=582.10, stdev=40.82, samples=20 00:42:57.233 lat (msec) : 10=0.03%, 20=1.03%, 50=98.94% 00:42:57.233 cpu : usr=98.72%, sys=0.89%, ctx=17, majf=0, minf=9 00:42:57.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename0: (groupid=0, jobs=1): err= 0: pid=1306123: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=579, BW=2316KiB/s (2372kB/s)(22.6MiB/10002msec) 00:42:57.233 slat (nsec): min=4496, max=99651, avg=54467.93, stdev=15674.49 00:42:57.233 clat (usec): min=16325, max=46664, avg=27170.93, stdev=2221.61 00:42:57.233 lat (usec): min=16381, max=46680, avg=27225.40, stdev=2220.08 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.233 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.233 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:42:57.233 | 99.00th=[31589], 99.50th=[32113], 99.90th=[46400], 99.95th=[46400], 00:42:57.233 | 99.99th=[46924] 00:42:57.233 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2316.11, stdev=119.36, samples=19 00:42:57.233 iops : min= 542, max= 640, avg=578.84, stdev=29.81, samples=19 00:42:57.233 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.233 cpu : usr=98.77%, sys=0.87%, ctx=15, majf=0, minf=9 00:42:57.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename1: (groupid=0, jobs=1): err= 0: pid=1306124: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=586, BW=2345KiB/s (2401kB/s)(22.9MiB/10002msec) 00:42:57.233 slat (usec): min=6, max=103, avg=31.52, stdev=22.73 00:42:57.233 clat (usec): min=8696, max=46595, avg=27068.64, stdev=3624.21 00:42:57.233 lat (usec): min=8706, max=46610, avg=27100.16, stdev=3621.95 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[17433], 5.00th=[20579], 10.00th=[24249], 20.00th=[24773], 00:42:57.233 | 30.00th=[25297], 40.00th=[26346], 50.00th=[26870], 60.00th=[27132], 00:42:57.233 | 70.00th=[28443], 80.00th=[29492], 90.00th=[30802], 95.00th=[31589], 00:42:57.233 | 99.00th=[39060], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:42:57.233 | 99.99th=[46400] 00:42:57.233 bw ( KiB/s): min= 2171, max= 2560, per=4.20%, avg=2343.89, stdev=109.06, samples=19 00:42:57.233 iops : min= 542, max= 640, avg=585.79, stdev=27.25, samples=19 00:42:57.233 lat (msec) : 10=0.07%, 20=4.67%, 50=95.26% 00:42:57.233 cpu : usr=98.74%, sys=0.91%, ctx=13, majf=0, minf=9 00:42:57.233 IO depths : 1=2.0%, 2=4.3%, 4=10.8%, 8=69.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=90.9%, 8=6.0%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 
latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename1: (groupid=0, jobs=1): err= 0: pid=1306125: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=579, BW=2317KiB/s (2373kB/s)(22.6MiB/10002msec) 00:42:57.233 slat (usec): min=6, max=105, avg=53.68, stdev=16.75 00:42:57.233 clat (usec): min=10192, max=47819, avg=27143.35, stdev=2613.44 00:42:57.233 lat (usec): min=10226, max=47845, avg=27197.03, stdev=2612.84 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.233 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.233 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:42:57.233 | 99.00th=[32375], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449], 00:42:57.233 | 99.99th=[47973] 00:42:57.233 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2316.95, stdev=120.03, samples=19 00:42:57.233 iops : min= 542, max= 640, avg=579.05, stdev=29.98, samples=19 00:42:57.233 lat (msec) : 20=0.83%, 50=99.17% 00:42:57.233 cpu : usr=97.50%, sys=1.55%, ctx=171, majf=0, minf=9 00:42:57.233 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:42:57.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.233 issued rwts: total=5794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.233 filename1: (groupid=0, jobs=1): err= 0: pid=1306126: Mon Dec 16 16:48:44 2024 00:42:57.233 read: IOPS=580, BW=2320KiB/s (2376kB/s)(22.7MiB/10013msec) 00:42:57.233 slat (usec): min=6, max=137, avg=57.97, stdev=22.08 00:42:57.233 clat (usec): min=14390, max=32643, avg=27059.87, stdev=1969.94 00:42:57.233 lat (usec): min=14423, max=32710, avg=27117.84, stdev=1972.75 00:42:57.233 clat percentiles (usec): 00:42:57.233 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:42:57.233 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.233 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30278], 95.00th=[30540], 00:42:57.233 | 99.00th=[31589], 99.50th=[32113], 99.90th=[32375], 99.95th=[32637], 00:42:57.233 | 99.99th=[32637] 00:42:57.233 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2323.63, stdev=129.97, samples=19 00:42:57.234 iops : min= 542, max= 640, avg=580.79, stdev=32.52, samples=19 00:42:57.234 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.234 cpu : usr=98.86%, sys=0.74%, ctx=15, majf=0, minf=9 00:42:57.234 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename1: (groupid=0, jobs=1): err= 0: pid=1306128: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=582, BW=2330KiB/s (2386kB/s)(22.8MiB/10024msec) 00:42:57.234 slat (usec): min=8, max=133, avg=60.34, stdev=19.61 00:42:57.234 clat (usec): min=10023, max=32627, avg=26959.18, stdev=2297.56 00:42:57.234 lat (usec): min=10096, max=32708, avg=27019.51, stdev=2299.69 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[18482], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 
50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30278], 95.00th=[30540], 00:42:57.234 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32375], 99.95th=[32637], 00:42:57.234 | 99.99th=[32637] 00:42:57.234 bw ( KiB/s): min= 2048, max= 2688, per=4.17%, avg=2328.80, stdev=163.47, samples=20 00:42:57.234 iops : min= 512, max= 672, avg=582.10, stdev=40.82, samples=20 00:42:57.234 lat (msec) : 20=1.10%, 50=98.90% 00:42:57.234 cpu : usr=99.12%, sys=0.49%, ctx=14, majf=0, minf=9 00:42:57.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename1: (groupid=0, jobs=1): err= 0: pid=1306129: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=578, BW=2316KiB/s (2371kB/s)(22.6MiB/10004msec) 00:42:57.234 slat (usec): min=4, max=103, avg=52.68, stdev=17.21 00:42:57.234 clat (usec): min=16270, max=48299, avg=27194.10, stdev=2214.77 00:42:57.234 lat (usec): min=16328, max=48313, avg=27246.78, stdev=2214.67 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:42:57.234 | 99.00th=[32113], 99.50th=[35390], 99.90th=[42730], 99.95th=[42730], 00:42:57.234 | 99.99th=[48497] 00:42:57.234 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2316.37, stdev=111.11, samples=19 00:42:57.234 iops : min= 544, max= 640, avg=578.95, stdev=27.69, samples=19 00:42:57.234 lat (msec) : 20=0.35%, 50=99.65% 00:42:57.234 cpu : usr=98.60%, sys=1.05%, ctx=11, majf=0, minf=9 00:42:57.234 IO depths : 1=5.3%, 2=11.5%, 4=24.9%, 8=51.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename1: (groupid=0, jobs=1): err= 0: pid=1306130: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=579, BW=2316KiB/s (2372kB/s)(22.6MiB/10002msec) 00:42:57.234 slat (nsec): min=8540, max=98679, avg=51370.19, stdev=17663.33 00:42:57.234 clat (usec): min=16401, max=46639, avg=27220.46, stdev=2264.31 00:42:57.234 lat (usec): min=16464, max=46653, avg=27271.83, stdev=2263.32 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27919], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:42:57.234 | 99.00th=[31851], 99.50th=[32637], 99.90th=[46400], 99.95th=[46400], 00:42:57.234 | 99.99th=[46400] 00:42:57.234 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2316.11, stdev=119.36, samples=19 00:42:57.234 iops : min= 542, max= 640, avg=578.84, stdev=29.81, samples=19 00:42:57.234 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.234 cpu : usr=98.10%, sys=1.29%, ctx=94, majf=0, minf=9 00:42:57.234 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 
00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename1: (groupid=0, jobs=1): err= 0: pid=1306131: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=580, BW=2322KiB/s (2378kB/s)(22.7MiB/10005msec) 00:42:57.234 slat (nsec): min=6513, max=87912, avg=29017.29, stdev=18530.43 00:42:57.234 clat (usec): min=17922, max=32658, avg=27248.85, stdev=1936.00 00:42:57.234 lat (usec): min=17937, max=32725, avg=27277.87, stdev=1936.46 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.234 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26608], 60.00th=[27132], 00:42:57.234 | 70.00th=[27919], 80.00th=[28967], 90.00th=[30540], 95.00th=[30802], 00:42:57.234 | 99.00th=[31589], 99.50th=[32113], 99.90th=[32375], 99.95th=[32637], 00:42:57.234 | 99.99th=[32637] 00:42:57.234 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2322.37, stdev=129.91, samples=19 00:42:57.234 iops : min= 542, max= 640, avg=580.32, stdev=32.49, samples=19 00:42:57.234 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.234 cpu : usr=99.15%, sys=0.45%, ctx=18, majf=0, minf=9 00:42:57.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename1: (groupid=0, jobs=1): err= 0: pid=1306132: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=582, BW=2330KiB/s (2386kB/s)(22.8MiB/10024msec) 00:42:57.234 slat (usec): min=9, max=130, avg=60.58, stdev=20.85 00:42:57.234 clat (usec): min=10008, max=32678, avg=26937.08, stdev=2263.42 00:42:57.234 lat (usec): min=10069, max=32709, avg=26997.66, stdev=2268.04 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[18482], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30016], 95.00th=[30540], 00:42:57.234 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32375], 99.95th=[32637], 00:42:57.234 | 99.99th=[32637] 00:42:57.234 bw ( KiB/s): min= 2048, max= 2688, per=4.17%, avg=2328.80, stdev=163.47, samples=20 00:42:57.234 iops : min= 512, max= 672, avg=582.10, stdev=40.82, samples=20 00:42:57.234 lat (msec) : 20=1.10%, 50=98.90% 00:42:57.234 cpu : usr=98.99%, sys=0.62%, ctx=10, majf=0, minf=9 00:42:57.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename2: (groupid=0, jobs=1): err= 0: pid=1306133: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=579, BW=2316KiB/s (2372kB/s)(22.6MiB/10002msec) 00:42:57.234 slat (usec): min=5, max=100, avg=55.63, stdev=15.12 00:42:57.234 clat (usec): min=16294, max=46549, avg=27145.23, 
stdev=2229.28 00:42:57.234 lat (usec): min=16340, max=46566, avg=27200.86, stdev=2228.19 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30278], 95.00th=[30802], 00:42:57.234 | 99.00th=[31589], 99.50th=[32375], 99.90th=[46400], 99.95th=[46400], 00:42:57.234 | 99.99th=[46400] 00:42:57.234 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2316.11, stdev=119.36, samples=19 00:42:57.234 iops : min= 542, max= 640, avg=578.84, stdev=29.81, samples=19 00:42:57.234 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.234 cpu : usr=98.62%, sys=1.01%, ctx=15, majf=0, minf=9 00:42:57.234 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename2: (groupid=0, jobs=1): err= 0: pid=1306134: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=578, BW=2316KiB/s (2371kB/s)(22.6MiB/10004msec) 00:42:57.234 slat (usec): min=4, max=104, avg=55.54, stdev=15.27 00:42:57.234 clat (usec): min=16276, max=47327, avg=27138.77, stdev=2241.87 00:42:57.234 lat (usec): min=16327, max=47340, avg=27194.31, stdev=2240.55 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:42:57.234 | 99.00th=[31589], 99.50th=[32113], 99.90th=[47449], 99.95th=[47449], 00:42:57.234 | 99.99th=[47449] 00:42:57.234 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2316.11, stdev=119.36, samples=19 00:42:57.234 iops : min= 542, max= 640, avg=578.84, stdev=29.81, samples=19 00:42:57.234 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.234 cpu : usr=98.31%, sys=1.16%, ctx=42, majf=0, minf=9 00:42:57.234 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.234 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.234 filename2: (groupid=0, jobs=1): err= 0: pid=1306135: Mon Dec 16 16:48:44 2024 00:42:57.234 read: IOPS=579, BW=2316KiB/s (2372kB/s)(22.6MiB/10002msec) 00:42:57.234 slat (usec): min=7, max=133, avg=57.81, stdev=22.01 00:42:57.234 clat (usec): min=16243, max=46511, avg=27088.18, stdev=2197.78 00:42:57.234 lat (usec): min=16252, max=46525, avg=27145.99, stdev=2199.11 00:42:57.234 clat percentiles (usec): 00:42:57.234 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.234 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.234 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30278], 95.00th=[30540], 00:42:57.234 | 99.00th=[31589], 99.50th=[32113], 99.90th=[46400], 99.95th=[46400], 00:42:57.234 | 99.99th=[46400] 00:42:57.234 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2316.11, stdev=119.36, samples=19 00:42:57.235 iops : min= 542, max= 
640, avg=578.84, stdev=29.81, samples=19 00:42:57.235 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.235 cpu : usr=98.92%, sys=0.66%, ctx=25, majf=0, minf=9 00:42:57.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.235 filename2: (groupid=0, jobs=1): err= 0: pid=1306136: Mon Dec 16 16:48:44 2024 00:42:57.235 read: IOPS=581, BW=2328KiB/s (2384kB/s)(22.8MiB/10011msec) 00:42:57.235 slat (usec): min=4, max=136, avg=47.90, stdev=26.79 00:42:57.235 clat (usec): min=11676, max=49376, avg=27060.09, stdev=2429.42 00:42:57.235 lat (usec): min=11688, max=49413, avg=27107.99, stdev=2433.86 00:42:57.235 clat percentiles (usec): 00:42:57.235 | 1.00th=[19530], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:42:57.235 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.235 | 70.00th=[27395], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:42:57.235 | 99.00th=[32900], 99.50th=[35914], 99.90th=[49021], 99.95th=[49021], 00:42:57.235 | 99.99th=[49546] 00:42:57.235 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2330.21, stdev=113.89, samples=19 00:42:57.235 iops : min= 544, max= 640, avg=582.32, stdev=28.44, samples=19 00:42:57.235 lat (msec) : 20=1.13%, 50=98.87% 00:42:57.235 cpu : usr=98.88%, sys=0.69%, ctx=31, majf=0, minf=9 00:42:57.235 IO depths : 1=4.4%, 2=10.5%, 4=24.5%, 8=52.5%, 16=8.1%, 32=0.0%, >=64=0.0% 00:42:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 issued rwts: total=5826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.235 filename2: (groupid=0, jobs=1): err= 0: pid=1306137: Mon Dec 16 16:48:44 2024 00:42:57.235 read: IOPS=582, BW=2330KiB/s (2386kB/s)(22.8MiB/10024msec) 00:42:57.235 slat (usec): min=7, max=141, avg=62.07, stdev=19.21 00:42:57.235 clat (usec): min=10033, max=32627, avg=26930.96, stdev=2287.22 00:42:57.235 lat (usec): min=10106, max=32709, avg=26993.03, stdev=2290.22 00:42:57.235 clat percentiles (usec): 00:42:57.235 | 1.00th=[18744], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:42:57.235 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:42:57.235 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30278], 95.00th=[30540], 00:42:57.235 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32375], 99.95th=[32637], 00:42:57.235 | 99.99th=[32637] 00:42:57.235 bw ( KiB/s): min= 2048, max= 2688, per=4.17%, avg=2328.80, stdev=163.47, samples=20 00:42:57.235 iops : min= 512, max= 672, avg=582.10, stdev=40.82, samples=20 00:42:57.235 lat (msec) : 20=1.10%, 50=98.90% 00:42:57.235 cpu : usr=99.07%, sys=0.54%, ctx=15, majf=0, minf=9 00:42:57.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.235 filename2: (groupid=0, jobs=1): err= 0: pid=1306138: 
Mon Dec 16 16:48:44 2024 00:42:57.235 read: IOPS=579, BW=2319KiB/s (2375kB/s)(22.7MiB/10016msec) 00:42:57.235 slat (usec): min=6, max=112, avg=26.80, stdev=17.23 00:42:57.235 clat (usec): min=15855, max=36001, avg=27338.01, stdev=2089.09 00:42:57.235 lat (usec): min=15907, max=36009, avg=27364.81, stdev=2086.14 00:42:57.235 clat percentiles (usec): 00:42:57.235 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:42:57.235 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.235 | 70.00th=[27919], 80.00th=[29492], 90.00th=[30540], 95.00th=[31065], 00:42:57.235 | 99.00th=[31851], 99.50th=[32113], 99.90th=[35390], 99.95th=[35914], 00:42:57.235 | 99.99th=[35914] 00:42:57.235 bw ( KiB/s): min= 2048, max= 2560, per=4.16%, avg=2322.63, stdev=143.20, samples=19 00:42:57.235 iops : min= 512, max= 640, avg=580.42, stdev=35.81, samples=19 00:42:57.235 lat (msec) : 20=0.28%, 50=99.72% 00:42:57.235 cpu : usr=98.36%, sys=1.04%, ctx=75, majf=0, minf=9 00:42:57.235 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.235 filename2: (groupid=0, jobs=1): err= 0: pid=1306139: Mon Dec 16 16:48:44 2024 00:42:57.235 read: IOPS=580, BW=2322KiB/s (2378kB/s)(22.7MiB/10005msec) 00:42:57.235 slat (nsec): min=7643, max=69628, avg=28180.13, stdev=12865.49 00:42:57.235 clat (usec): min=10750, max=46243, avg=27327.35, stdev=2041.64 00:42:57.235 lat (usec): min=10759, max=46265, avg=27355.53, stdev=2040.70 00:42:57.235 clat percentiles (usec): 00:42:57.235 | 1.00th=[24511], 5.00th=[25035], 10.00th=[25035], 20.00th=[25560], 00:42:57.235 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:42:57.235 | 70.00th=[27919], 80.00th=[29230], 90.00th=[30540], 95.00th=[30802], 00:42:57.235 | 99.00th=[31589], 99.50th=[32375], 99.90th=[32637], 99.95th=[41157], 00:42:57.235 | 99.99th=[46400] 00:42:57.235 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2322.37, stdev=129.91, samples=19 00:42:57.235 iops : min= 542, max= 640, avg=580.32, stdev=32.49, samples=19 00:42:57.235 lat (msec) : 20=0.34%, 50=99.66% 00:42:57.235 cpu : usr=97.64%, sys=1.45%, ctx=154, majf=0, minf=9 00:42:57.235 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.235 filename2: (groupid=0, jobs=1): err= 0: pid=1306141: Mon Dec 16 16:48:44 2024 00:42:57.235 read: IOPS=581, BW=2325KiB/s (2381kB/s)(22.8MiB/10019msec) 00:42:57.235 slat (nsec): min=7379, max=59012, avg=15824.23, stdev=7871.98 00:42:57.235 clat (usec): min=15714, max=32751, avg=27397.34, stdev=2031.91 00:42:57.235 lat (usec): min=15722, max=32767, avg=27413.17, stdev=2031.30 00:42:57.235 clat percentiles (usec): 00:42:57.235 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25297], 20.00th=[25822], 00:42:57.235 | 30.00th=[26608], 40.00th=[26870], 50.00th=[26870], 60.00th=[27132], 00:42:57.235 | 70.00th=[27919], 80.00th=[29230], 90.00th=[30540], 95.00th=[31065], 
00:42:57.235 | 99.00th=[31851], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:42:57.235 | 99.99th=[32637] 00:42:57.235 bw ( KiB/s): min= 2048, max= 2560, per=4.16%, avg=2321.60, stdev=150.25, samples=20 00:42:57.235 iops : min= 512, max= 640, avg=580.20, stdev=37.47, samples=20 00:42:57.235 lat (msec) : 20=0.55%, 50=99.45% 00:42:57.235 cpu : usr=98.70%, sys=0.94%, ctx=12, majf=0, minf=9 00:42:57.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.235 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:57.235 00:42:57.235 Run status group 0 (all jobs): 00:42:57.235 READ: bw=54.5MiB/s (57.2MB/s), 2316KiB/s-2385KiB/s (2371kB/s-2443kB/s), io=546MiB (573MB), run=10002-10024msec 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:57.235 16:48:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 bdev_null0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 [2024-12-16 16:48:44.814324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 bdev_null1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:57.235 16:48:44 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:57.235 { 00:42:57.235 "params": { 00:42:57.235 "name": "Nvme$subsystem", 00:42:57.235 "trtype": "$TEST_TRANSPORT", 00:42:57.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:57.235 "adrfam": "ipv4", 00:42:57.235 "trsvcid": "$NVMF_PORT", 00:42:57.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:57.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:57.235 "hdgst": ${hdgst:-false}, 00:42:57.235 "ddgst": ${ddgst:-false} 00:42:57.235 }, 00:42:57.235 "method": "bdev_nvme_attach_controller" 00:42:57.235 } 00:42:57.235 EOF 00:42:57.235 )") 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:57.235 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:57.235 { 00:42:57.235 "params": { 00:42:57.235 "name": "Nvme$subsystem", 00:42:57.235 "trtype": "$TEST_TRANSPORT", 00:42:57.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:57.235 "adrfam": "ipv4", 00:42:57.235 "trsvcid": "$NVMF_PORT", 00:42:57.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:57.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:57.235 "hdgst": ${hdgst:-false}, 00:42:57.235 "ddgst": ${ddgst:-false} 00:42:57.235 }, 00:42:57.235 "method": "bdev_nvme_attach_controller" 00:42:57.235 } 00:42:57.235 EOF 00:42:57.235 )") 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:57.236 16:48:44 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:57.236 "params": { 00:42:57.236 "name": "Nvme0", 00:42:57.236 "trtype": "tcp", 00:42:57.236 "traddr": "10.0.0.2", 00:42:57.236 "adrfam": "ipv4", 00:42:57.236 "trsvcid": "4420", 00:42:57.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:57.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:57.236 "hdgst": false, 00:42:57.236 "ddgst": false 00:42:57.236 }, 00:42:57.236 "method": "bdev_nvme_attach_controller" 00:42:57.236 },{ 00:42:57.236 "params": { 00:42:57.236 "name": "Nvme1", 00:42:57.236 "trtype": "tcp", 00:42:57.236 "traddr": "10.0.0.2", 00:42:57.236 "adrfam": "ipv4", 00:42:57.236 "trsvcid": "4420", 00:42:57.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:57.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:57.236 "hdgst": false, 00:42:57.236 "ddgst": false 00:42:57.236 }, 00:42:57.236 "method": "bdev_nvme_attach_controller" 00:42:57.236 }' 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:57.236 16:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.236 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:57.236 ... 00:42:57.236 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:57.236 ... 
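The two filename banners above come from the fio job file that target/dif.sh generates for this pass (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1, per the xtrace lines earlier). A minimal sketch of an equivalent job file follows; it is not the generated file verbatim, and the bdev names and output path are assumptions — the spdk_bdev engine resolves filename= against bdev names from the JSON config, not filesystem paths:

# Sketch only. bs takes read,write,trim sizes, matching the
# (R) 8192B / (W) 16.0KiB / (T) 128KiB triple in the banner above;
# thread=1 is required by the spdk_bdev ioengine.
cat <<'FIO' > /tmp/dif_rand_params.fio
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO

Two job sections with numjobs=2 each account for the four threads fio reports next.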
00:42:57.236 fio-3.35 00:42:57.236 Starting 4 threads 00:43:02.503 00:43:02.503 filename0: (groupid=0, jobs=1): err= 0: pid=1308024: Mon Dec 16 16:48:50 2024 00:43:02.503 read: IOPS=2756, BW=21.5MiB/s (22.6MB/s)(108MiB/5002msec) 00:43:02.503 slat (nsec): min=6148, max=48324, avg=8927.23, stdev=3101.95 00:43:02.503 clat (usec): min=788, max=5587, avg=2876.91, stdev=391.25 00:43:02.503 lat (usec): min=802, max=5593, avg=2885.83, stdev=391.02 00:43:02.503 clat percentiles (usec): 00:43:02.503 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2606], 00:43:02.503 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:43:02.503 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3425], 00:43:02.503 | 99.00th=[ 4015], 99.50th=[ 4490], 99.90th=[ 5145], 99.95th=[ 5211], 00:43:02.503 | 99.99th=[ 5604] 00:43:02.503 bw ( KiB/s): min=21392, max=22960, per=26.04%, avg=22058.67, stdev=588.37, samples=9 00:43:02.503 iops : min= 2674, max= 2870, avg=2757.33, stdev=73.55, samples=9 00:43:02.503 lat (usec) : 1000=0.33% 00:43:02.503 lat (msec) : 2=1.52%, 4=97.14%, 10=1.00% 00:43:02.503 cpu : usr=96.02%, sys=3.66%, ctx=10, majf=0, minf=0 00:43:02.503 IO depths : 1=0.1%, 2=3.4%, 4=67.3%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 issued rwts: total=13786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:02.503 filename0: (groupid=0, jobs=1): err= 0: pid=1308025: Mon Dec 16 16:48:50 2024 00:43:02.503 read: IOPS=2615, BW=20.4MiB/s (21.4MB/s)(102MiB/5007msec) 00:43:02.503 slat (nsec): min=6139, max=31626, avg=8957.96, stdev=3012.88 00:43:02.503 clat (usec): min=661, max=8440, avg=3032.93, stdev=457.92 00:43:02.503 lat (usec): min=672, max=8451, avg=3041.89, stdev=457.70 00:43:02.503 clat percentiles (usec): 00:43:02.503 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2868], 00:43:02.503 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:02.503 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3884], 00:43:02.503 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 8291], 00:43:02.503 | 99.99th=[ 8455] 00:43:02.503 bw ( KiB/s): min=20000, max=21776, per=24.72%, avg=20939.20, stdev=481.60, samples=10 00:43:02.503 iops : min= 2500, max= 2722, avg=2617.40, stdev=60.20, samples=10 00:43:02.503 lat (usec) : 750=0.02% 00:43:02.503 lat (msec) : 2=0.71%, 4=95.03%, 10=4.25% 00:43:02.503 cpu : usr=95.83%, sys=3.88%, ctx=9, majf=0, minf=0 00:43:02.503 IO depths : 1=0.3%, 2=3.2%, 4=67.9%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 issued rwts: total=13095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:02.503 filename1: (groupid=0, jobs=1): err= 0: pid=1308026: Mon Dec 16 16:48:50 2024 00:43:02.503 read: IOPS=2627, BW=20.5MiB/s (21.5MB/s)(103MiB/5006msec) 00:43:02.503 slat (nsec): min=6156, max=33049, avg=8708.09, stdev=3053.79 00:43:02.503 clat (usec): min=583, max=8468, avg=3018.69, stdev=410.87 00:43:02.503 lat (usec): min=595, max=8477, avg=3027.40, stdev=410.83 00:43:02.503 clat percentiles (usec): 00:43:02.503 | 1.00th=[ 2057], 5.00th=[ 2442], 
10.00th=[ 2638], 20.00th=[ 2835], 00:43:02.503 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:02.503 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3458], 95.00th=[ 3720], 00:43:02.503 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5342], 99.95th=[ 6063], 00:43:02.503 | 99.99th=[ 8455] 00:43:02.503 bw ( KiB/s): min=20256, max=21856, per=24.83%, avg=21035.20, stdev=523.32, samples=10 00:43:02.503 iops : min= 2532, max= 2732, avg=2629.40, stdev=65.42, samples=10 00:43:02.503 lat (usec) : 750=0.01% 00:43:02.503 lat (msec) : 2=0.76%, 4=96.24%, 10=3.00% 00:43:02.503 cpu : usr=95.72%, sys=3.96%, ctx=6, majf=0, minf=9 00:43:02.503 IO depths : 1=0.3%, 2=2.5%, 4=70.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 issued rwts: total=13152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:02.503 filename1: (groupid=0, jobs=1): err= 0: pid=1308027: Mon Dec 16 16:48:50 2024 00:43:02.503 read: IOPS=2594, BW=20.3MiB/s (21.2MB/s)(101MiB/5006msec) 00:43:02.503 slat (nsec): min=6168, max=29426, avg=8780.74, stdev=2916.02 00:43:02.503 clat (usec): min=652, max=8146, avg=3059.66, stdev=441.58 00:43:02.503 lat (usec): min=658, max=8153, avg=3068.44, stdev=441.48 00:43:02.503 clat percentiles (usec): 00:43:02.503 | 1.00th=[ 2147], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2900], 00:43:02.503 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:02.503 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3523], 95.00th=[ 3851], 00:43:02.503 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 7963], 00:43:02.503 | 99.99th=[ 8160] 00:43:02.503 bw ( KiB/s): min=19904, max=21344, per=24.52%, avg=20769.00, stdev=458.42, samples=10 00:43:02.503 iops : min= 2488, max= 2668, avg=2596.10, stdev=57.28, samples=10 00:43:02.503 lat (usec) : 750=0.01%, 1000=0.03% 00:43:02.503 lat (msec) : 2=0.60%, 4=95.50%, 10=3.86% 00:43:02.503 cpu : usr=96.26%, sys=3.44%, ctx=7, majf=0, minf=9 00:43:02.503 IO depths : 1=0.1%, 2=2.0%, 4=69.2%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.503 issued rwts: total=12986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.503 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:02.503 00:43:02.503 Run status group 0 (all jobs): 00:43:02.503 READ: bw=82.7MiB/s (86.7MB/s), 20.3MiB/s-21.5MiB/s (21.2MB/s-22.6MB/s), io=414MiB (434MB), run=5002-5007msec 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
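The teardown that starts here mirrors the setup in reverse: nvmf_delete_subsystem removes the NVMe-oF subsystem (traced above), then bdev_null_delete drops the backing null bdev (traced next). The same pair of RPCs can be issued by hand with scripts/rpc.py against the running target, assuming the default RPC socket:

# Equivalent standalone teardown for subsystem 0; both RPC names
# appear verbatim in the xtrace output around this point.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0

Deleting the subsystem before its bdev ensures no namespace still exports the bdev when it disappears.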
00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 00:43:02.763 real 0m24.271s 00:43:02.763 user 4m51.607s 00:43:02.763 sys 0m4.524s 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 ************************************ 00:43:02.763 END TEST fio_dif_rand_params 00:43:02.763 ************************************ 00:43:02.763 16:48:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:02.763 16:48:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:02.763 16:48:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 ************************************ 00:43:02.763 START TEST fio_dif_digest 00:43:02.763 ************************************ 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:02.763 16:48:51 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 bdev_null0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:02.763 [2024-12-16 16:48:51.279117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:02.763 { 00:43:02.763 "params": { 00:43:02.763 "name": "Nvme$subsystem", 00:43:02.763 "trtype": "$TEST_TRANSPORT", 
00:43:02.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:02.763 "adrfam": "ipv4", 00:43:02.763 "trsvcid": "$NVMF_PORT", 00:43:02.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:02.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:02.763 "hdgst": ${hdgst:-false}, 00:43:02.763 "ddgst": ${ddgst:-false} 00:43:02.763 }, 00:43:02.763 "method": "bdev_nvme_attach_controller" 00:43:02.763 } 00:43:02.763 EOF 00:43:02.763 )") 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:02.763 16:48:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:02.763 "params": { 00:43:02.763 "name": "Nvme0", 00:43:02.763 "trtype": "tcp", 00:43:02.763 "traddr": "10.0.0.2", 00:43:02.763 "adrfam": "ipv4", 00:43:02.763 "trsvcid": "4420", 00:43:02.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.763 "hdgst": true, 00:43:02.764 "ddgst": true 00:43:02.764 }, 00:43:02.764 "method": "bdev_nvme_attach_controller" 00:43:02.764 }' 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:02.764 16:48:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.335 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:03.335 ... 
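Aside: on the initiator side, the JSON printed above is everything fio's spdk_bdev engine needs: one bdev_nvme_attach_controller call with TCP header digest (hdgst) and data digest (ddgst) enabled, fed to fio via --spdk_json_conf. The harness pipes the JSON and the job options through /dev/fd; written out as a file, an equivalent run looks roughly like this (the "subsystems" envelope, file names, and relative paths are assumptions; the attach parameters are the ones printed above, and --numjobs=3 approximates the three threads the harness starts):

cat > bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true, "ddgst": true
      }
    }]
  }]
}
EOF
# attaching controller "Nvme0" exposes its namespace as bdev "Nvme0n1"
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --thread=1 \
  --name=filename0 --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3
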
00:43:03.335 fio-3.35 00:43:03.335 Starting 3 threads 00:43:15.620 00:43:15.620 filename0: (groupid=0, jobs=1): err= 0: pid=1309154: Mon Dec 16 16:49:02 2024 00:43:15.620 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(363MiB/10007msec) 00:43:15.620 slat (nsec): min=6435, max=42140, avg=13018.47, stdev=5009.90 00:43:15.620 clat (usec): min=5407, max=13627, avg=10337.07, stdev=749.73 00:43:15.620 lat (usec): min=5417, max=13640, avg=10350.09, stdev=749.52 00:43:15.620 clat percentiles (usec): 00:43:15.620 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:43:15.620 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:43:15.620 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:43:15.620 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13042], 99.95th=[13304], 00:43:15.620 | 99.99th=[13566] 00:43:15.620 bw ( KiB/s): min=34816, max=38144, per=35.17%, avg=37079.58, stdev=810.73, samples=19 00:43:15.620 iops : min= 272, max= 298, avg=289.68, stdev= 6.33, samples=19 00:43:15.620 lat (msec) : 10=31.31%, 20=68.69% 00:43:15.620 cpu : usr=94.80%, sys=4.91%, ctx=23, majf=0, minf=94 00:43:15.620 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.620 issued rwts: total=2900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.620 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:15.620 filename0: (groupid=0, jobs=1): err= 0: pid=1309155: Mon Dec 16 16:49:02 2024 00:43:15.620 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10045msec) 00:43:15.620 slat (nsec): min=6445, max=52501, avg=13873.52, stdev=5496.11 00:43:15.620 clat (usec): min=8714, max=47866, avg=11012.06, stdev=1232.52 00:43:15.620 lat (usec): min=8726, max=47878, avg=11025.94, stdev=1232.66 00:43:15.620 clat percentiles (usec): 00:43:15.620 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:43:15.620 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:43:15.620 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:43:15.620 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14091], 99.95th=[46924], 00:43:15.620 | 99.99th=[47973] 00:43:15.620 bw ( KiB/s): min=33536, max=36096, per=33.11%, avg=34905.60, stdev=577.08, samples=20 00:43:15.620 iops : min= 262, max= 282, avg=272.70, stdev= 4.51, samples=20 00:43:15.620 lat (msec) : 10=8.17%, 20=91.76%, 50=0.07% 00:43:15.620 cpu : usr=94.43%, sys=5.27%, ctx=16, majf=0, minf=98 00:43:15.620 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.620 issued rwts: total=2729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.620 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:15.620 filename0: (groupid=0, jobs=1): err= 0: pid=1309156: Mon Dec 16 16:49:02 2024 00:43:15.620 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(331MiB/10045msec) 00:43:15.621 slat (nsec): min=6516, max=52626, avg=13368.62, stdev=5188.45 00:43:15.621 clat (usec): min=9042, max=52773, avg=11365.95, stdev=1295.85 00:43:15.621 lat (usec): min=9055, max=52782, avg=11379.32, stdev=1295.88 00:43:15.621 clat percentiles (usec): 00:43:15.621 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:43:15.621 | 
30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:43:15.621 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:43:15.621 | 99.00th=[13435], 99.50th=[13698], 99.90th=[15008], 99.95th=[44827], 00:43:15.621 | 99.99th=[52691] 00:43:15.621 bw ( KiB/s): min=33024, max=34560, per=32.08%, avg=33817.60, stdev=388.69, samples=20 00:43:15.621 iops : min= 258, max= 270, avg=264.20, stdev= 3.04, samples=20 00:43:15.621 lat (msec) : 10=3.59%, 20=96.33%, 50=0.04%, 100=0.04% 00:43:15.621 cpu : usr=94.62%, sys=5.07%, ctx=16, majf=0, minf=85 00:43:15.621 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:15.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.621 issued rwts: total=2644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.621 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:15.621 00:43:15.621 Run status group 0 (all jobs): 00:43:15.621 READ: bw=103MiB/s (108MB/s), 32.9MiB/s-36.2MiB/s (34.5MB/s-38.0MB/s), io=1034MiB (1084MB), run=10007-10045msec 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.621 00:43:15.621 real 0m11.212s 00:43:15.621 user 0m35.542s 00:43:15.621 sys 0m1.874s 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:15.621 16:49:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:15.621 ************************************ 00:43:15.621 END TEST fio_dif_digest 00:43:15.621 ************************************ 00:43:15.621 16:49:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:15.621 16:49:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:15.621 rmmod nvme_tcp 00:43:15.621 rmmod nvme_fabrics 00:43:15.621 rmmod nvme_keyring 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1300465 ']' 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1300465 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1300465 ']' 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1300465 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1300465 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1300465' 00:43:15.621 killing process with pid 1300465 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1300465 00:43:15.621 16:49:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1300465 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:15.621 16:49:02 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:16.999 Waiting for block devices as requested 00:43:16.999 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:16.999 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:17.258 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:17.258 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:17.258 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:17.517 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:17.517 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:17.517 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:17.776 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:17.776 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:17.776 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:17.776 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:18.035 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:18.035 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:18.035 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:18.294 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:18.294 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:18.294 16:49:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:18.294 16:49:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:18.294 16:49:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:20.828 16:49:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:20.828 00:43:20.828 real 
1m13.871s 00:43:20.828 user 7m8.848s 00:43:20.828 sys 0m19.945s 00:43:20.828 16:49:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:20.828 16:49:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:20.828 ************************************ 00:43:20.828 END TEST nvmf_dif 00:43:20.828 ************************************ 00:43:20.828 16:49:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:20.828 16:49:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:20.828 16:49:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:20.828 16:49:08 -- common/autotest_common.sh@10 -- # set +x 00:43:20.828 ************************************ 00:43:20.828 START TEST nvmf_abort_qd_sizes 00:43:20.828 ************************************ 00:43:20.828 16:49:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:20.828 * Looking for test storage... 00:43:20.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:20.828 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:20.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.829 --rc genhtml_branch_coverage=1 00:43:20.829 --rc genhtml_function_coverage=1 00:43:20.829 --rc genhtml_legend=1 00:43:20.829 --rc geninfo_all_blocks=1 00:43:20.829 --rc geninfo_unexecuted_blocks=1 00:43:20.829 00:43:20.829 ' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:20.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.829 --rc genhtml_branch_coverage=1 00:43:20.829 --rc genhtml_function_coverage=1 00:43:20.829 --rc genhtml_legend=1 00:43:20.829 --rc geninfo_all_blocks=1 00:43:20.829 --rc geninfo_unexecuted_blocks=1 00:43:20.829 00:43:20.829 ' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:20.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.829 --rc genhtml_branch_coverage=1 00:43:20.829 --rc genhtml_function_coverage=1 00:43:20.829 --rc genhtml_legend=1 00:43:20.829 --rc geninfo_all_blocks=1 00:43:20.829 --rc geninfo_unexecuted_blocks=1 00:43:20.829 00:43:20.829 ' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:20.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:20.829 --rc genhtml_branch_coverage=1 00:43:20.829 --rc genhtml_function_coverage=1 00:43:20.829 --rc genhtml_legend=1 00:43:20.829 --rc geninfo_all_blocks=1 00:43:20.829 --rc geninfo_unexecuted_blocks=1 00:43:20.829 00:43:20.829 ' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:20.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:20.829 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:26.103 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:26.362 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:26.363 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:26.363 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:26.363 Found net devices under 0000:af:00.0: cvl_0_0 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:26.363 Found net devices under 0000:af:00.1: cvl_0_1 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:26.363 16:49:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:26.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:26.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:43:26.363 00:43:26.363 --- 10.0.0.2 ping statistics --- 00:43:26.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.363 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:26.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
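Aside: the ping exchange here validates the loopback topology nvmftestinit builds on the two physical ports: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace to play the target, while its peer cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator. Condensed from the trace above (run as root), the wiring is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
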
00:43:26.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:43:26.363 00:43:26.363 --- 10.0.0.1 ping statistics --- 00:43:26.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.363 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:26.363 16:49:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:29.657 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:29.657 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:30.225 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1316806 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1316806 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1316806 ']' 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:30.225 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:30.483 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:30.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:30.483 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:30.483 16:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:30.483 [2024-12-16 16:49:18.877669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:43:30.483 [2024-12-16 16:49:18.877715] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.483 [2024-12-16 16:49:18.956439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:30.483 [2024-12-16 16:49:18.980973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:30.483 [2024-12-16 16:49:18.981009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:30.483 [2024-12-16 16:49:18.981016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:30.483 [2024-12-16 16:49:18.981022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:30.483 [2024-12-16 16:49:18.981027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:30.483 [2024-12-16 16:49:18.982444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:30.483 [2024-12-16 16:49:18.982553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:30.483 [2024-12-16 16:49:18.982678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.483 [2024-12-16 16:49:18.982679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:30.483 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:30.483 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:30.484 16:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:30.484 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:30.484 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:30.742 
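Aside: nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers; the four "Reactor started" notices correspond to the 0xf core mask. Stripped of harness plumbing, the startup amounts to (a sketch; paths are assumed relative to the SPDK tree, and waitforlisten is the harness helper traced above):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
waitforlisten "$nvmfpid"   # polls until the app accepts RPCs on /var/tmp/spdk.sock
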
16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:30.742 16:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:30.742 ************************************ 00:43:30.742 START TEST spdk_target_abort 00:43:30.742 ************************************ 00:43:30.742 16:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:30.742 16:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:30.742 16:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:30.742 16:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.742 16:49:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.024 spdk_targetn1 00:43:34.024 16:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.024 16:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:34.024 16:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.024 16:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.024 [2024-12-16 16:49:21.990281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.024 [2024-12-16 16:49:22.042584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:34.024 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:34.025 16:49:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:37.308 Initializing NVMe Controllers 00:43:37.308 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:37.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:37.308 Initialization complete. Launching workers. 00:43:37.308 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15071, failed: 0 00:43:37.308 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1355, failed to submit 13716 00:43:37.308 success 717, unsuccessful 638, failed 0 00:43:37.308 16:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:37.308 16:49:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:40.593 Initializing NVMe Controllers 00:43:40.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:40.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:40.593 Initialization complete. Launching workers. 00:43:40.593 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8532, failed: 0 00:43:40.593 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7299 00:43:40.593 success 323, unsuccessful 910, failed 0 00:43:40.593 16:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:40.593 16:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:43.881 Initializing NVMe Controllers 00:43:43.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:43.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:43.881 Initialization complete. Launching workers. 
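Aside: each pass of the loop reruns SPDK's abort example against the same subsystem with a deeper queue (qds 4, 24, 64), so the summaries show how abort handling scales with commands in flight: roughly, "abort submitted" counts abort requests issued, "success" those that caught their I/O before it completed, and "unsuccessful" those that raced and lost. One iteration, as traced:

# -q: queue depth; -w rw with -M 50: 50/50 read/write mix; -o: 4 KiB I/Os
./build/examples/abort -q 64 -w rw -M 50 -o 4096 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
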
00:43:43.881 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38689, failed: 0 00:43:43.881 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2886, failed to submit 35803 00:43:43.881 success 599, unsuccessful 2287, failed 0 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.881 16:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1316806 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1316806 ']' 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1316806 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316806 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316806' 00:43:44.817 killing process with pid 1316806 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1316806 00:43:44.817 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1316806 00:43:45.076 00:43:45.076 real 0m14.282s 00:43:45.076 user 0m54.651s 00:43:45.076 sys 0m2.367s 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.076 ************************************ 00:43:45.076 END TEST spdk_target_abort 00:43:45.076 ************************************ 00:43:45.076 16:49:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:45.076 16:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:45.076 16:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:45.076 16:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:45.076 ************************************ 00:43:45.076 START TEST kernel_target_abort 00:43:45.076 
************************************ 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:45.076 16:49:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:47.610 Waiting for block devices as requested 00:43:47.869 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:47.869 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:47.869 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:48.128 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:48.128 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:48.128 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:48.128 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:48.387 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:48.387 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:48.387 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:48.646 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:48.646 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:48.646 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:48.906 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:48.906 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:48.906 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:48.906 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:49.165 No valid GPT data, bailing 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:49.165 16:49:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:49.165 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:49.424 00:43:49.424 Discovery Log Number of Records 2, Generation counter 2 00:43:49.424 =====Discovery Log Entry 0====== 00:43:49.424 trtype: tcp 00:43:49.424 adrfam: ipv4 00:43:49.424 subtype: current discovery subsystem 00:43:49.424 treq: not specified, sq flow control disable supported 00:43:49.424 portid: 1 00:43:49.424 trsvcid: 4420 00:43:49.424 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:49.424 traddr: 10.0.0.1 00:43:49.424 eflags: none 00:43:49.424 sectype: none 00:43:49.424 =====Discovery Log Entry 1====== 00:43:49.424 trtype: tcp 00:43:49.424 adrfam: ipv4 00:43:49.424 subtype: nvme subsystem 00:43:49.424 treq: not specified, sq flow control disable supported 00:43:49.424 portid: 1 00:43:49.424 trsvcid: 4420 00:43:49.424 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:49.424 traddr: 10.0.0.1 00:43:49.424 eflags: none 00:43:49.424 sectype: none 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:49.424 16:49:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:49.424 16:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:52.723 Initializing NVMe Controllers 00:43:52.723 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:52.723 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:52.723 Initialization complete. Launching workers. 00:43:52.723 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94758, failed: 0 00:43:52.723 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94758, failed to submit 0 00:43:52.723 success 0, unsuccessful 94758, failed 0 00:43:52.723 16:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:52.724 16:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:56.008 Initializing NVMe Controllers 00:43:56.008 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:56.008 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:56.008 Initialization complete. Launching workers. 
00:43:56.008 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151218, failed: 0 00:43:56.009 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37942, failed to submit 113276 00:43:56.009 success 0, unsuccessful 37942, failed 0 00:43:56.009 16:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:56.009 16:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:58.539 Initializing NVMe Controllers 00:43:58.539 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:58.539 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:58.539 Initialization complete. Launching workers. 00:43:58.539 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142029, failed: 0 00:43:58.539 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35578, failed to submit 106451 00:43:58.539 success 0, unsuccessful 35578, failed 0 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:58.539 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:58.798 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:58.798 16:49:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:01.333 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:01.333 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:01.333 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:01.333 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:01.333 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:01.674 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:44:01.674 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:02.611 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:02.611 00:44:02.611 real 0m17.492s 00:44:02.611 user 0m9.164s 00:44:02.611 sys 0m4.991s 00:44:02.611 16:49:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:02.611 16:49:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:02.611 ************************************ 00:44:02.611 END TEST kernel_target_abort 00:44:02.611 ************************************ 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:02.611 rmmod nvme_tcp 00:44:02.611 rmmod nvme_fabrics 00:44:02.611 rmmod nvme_keyring 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1316806 ']' 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1316806 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1316806 ']' 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1316806 00:44:02.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1316806) - No such process 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1316806 is not found' 00:44:02.611 Process with pid 1316806 is not found 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:02.611 16:49:51 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:05.147 Waiting for block devices as requested 00:44:05.406 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:05.406 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:05.406 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:05.665 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:05.665 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:05.665 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:05.924 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:05.924 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:05.924 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:05.924 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:06.183 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:06.183 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:06.183 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:06.442 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:06.442 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:06.442 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:06.700 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:06.700 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:06.701 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:06.701 16:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:06.701 16:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:06.701 16:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:09.234 16:49:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:09.234 00:44:09.234 real 0m48.264s 00:44:09.234 user 1m8.129s 00:44:09.234 sys 0m15.975s 00:44:09.234 16:49:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:09.234 16:49:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:09.234 ************************************ 00:44:09.234 END TEST nvmf_abort_qd_sizes 00:44:09.234 ************************************ 00:44:09.234 16:49:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:09.234 16:49:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:09.234 16:49:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:09.234 16:49:57 -- common/autotest_common.sh@10 -- # set +x 00:44:09.234 ************************************ 00:44:09.234 START TEST keyring_file 00:44:09.234 ************************************ 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:09.234 * Looking for test storage... 
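The nvmftestfini teardown traced above unloads the nvme-tcp, nvme-fabrics and nvme-keyring kernel modules, confirms the target process is already gone, and then scrubs the firewall and namespace state the test left behind. The iptr helper visible in the trace re-applies the current iptables ruleset minus any rule tagged SPDK_NVMF; a minimal sketch of that cleanup, assuming the same SPDK_NVMF comment tag used by nvmf/common.sh:

    # Sketch: keep every current rule except the SPDK_NVMF-tagged ones
    iptables-save | grep -v SPDK_NVMF | iptables-restore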
00:44:09.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:09.234 16:49:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:09.234 16:49:57 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:09.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.234 --rc genhtml_branch_coverage=1 00:44:09.234 --rc genhtml_function_coverage=1 00:44:09.234 --rc genhtml_legend=1 00:44:09.234 --rc geninfo_all_blocks=1 00:44:09.235 --rc geninfo_unexecuted_blocks=1 00:44:09.235 00:44:09.235 ' 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.235 --rc genhtml_branch_coverage=1 00:44:09.235 --rc genhtml_function_coverage=1 00:44:09.235 --rc genhtml_legend=1 00:44:09.235 --rc geninfo_all_blocks=1 
00:44:09.235 --rc geninfo_unexecuted_blocks=1 00:44:09.235 00:44:09.235 ' 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.235 --rc genhtml_branch_coverage=1 00:44:09.235 --rc genhtml_function_coverage=1 00:44:09.235 --rc genhtml_legend=1 00:44:09.235 --rc geninfo_all_blocks=1 00:44:09.235 --rc geninfo_unexecuted_blocks=1 00:44:09.235 00:44:09.235 ' 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:09.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.235 --rc genhtml_branch_coverage=1 00:44:09.235 --rc genhtml_function_coverage=1 00:44:09.235 --rc genhtml_legend=1 00:44:09.235 --rc geninfo_all_blocks=1 00:44:09.235 --rc geninfo_unexecuted_blocks=1 00:44:09.235 00:44:09.235 ' 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:09.235 16:49:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:09.235 16:49:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:09.235 16:49:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:09.235 16:49:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:09.235 16:49:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.235 16:49:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.235 16:49:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.235 16:49:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:09.235 16:49:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:09.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FODIUd06xn 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FODIUd06xn 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FODIUd06xn 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FODIUd06xn 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qTD17WOH9P 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:09.235 16:49:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qTD17WOH9P 00:44:09.235 16:49:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qTD17WOH9P 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qTD17WOH9P 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1325393 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:09.235 16:49:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1325393 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325393 ']' 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:09.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:09.235 16:49:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:09.235 [2024-12-16 16:49:57.693220] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:09.235 [2024-12-16 16:49:57.693265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325393 ] 00:44:09.235 [2024-12-16 16:49:57.768974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:09.235 [2024-12-16 16:49:57.791907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:09.495 16:49:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:09.495 16:49:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:09.495 16:49:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:09.495 16:49:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.495 16:49:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:09.495 [2024-12-16 16:49:57.992737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:09.495 null0 00:44:09.495 [2024-12-16 16:49:58.024791] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:09.495 [2024-12-16 16:49:58.025056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.495 16:49:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:09.495 [2024-12-16 16:49:58.052854] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:09.495 request: 00:44:09.495 { 00:44:09.495 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:09.495 "secure_channel": false, 00:44:09.495 "listen_address": { 00:44:09.495 "trtype": "tcp", 00:44:09.495 "traddr": "127.0.0.1", 00:44:09.495 "trsvcid": "4420" 00:44:09.495 }, 00:44:09.495 "method": "nvmf_subsystem_add_listener", 00:44:09.495 "req_id": 1 00:44:09.495 } 00:44:09.495 Got JSON-RPC error response 00:44:09.495 response: 00:44:09.495 { 00:44:09.495 
"code": -32602, 00:44:09.495 "message": "Invalid parameters" 00:44:09.495 } 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:09.495 16:49:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=1325403 00:44:09.495 16:49:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1325403 /var/tmp/bperf.sock 00:44:09.495 16:49:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325403 ']' 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:09.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:09.495 16:49:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:09.753 [2024-12-16 16:49:58.108525] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:09.753 [2024-12-16 16:49:58.108565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325403 ] 00:44:09.753 [2024-12-16 16:49:58.183520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:09.753 [2024-12-16 16:49:58.205441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:09.753 16:49:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:09.753 16:49:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:09.753 16:49:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:09.753 16:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:10.011 16:49:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qTD17WOH9P 00:44:10.011 16:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qTD17WOH9P 00:44:10.269 16:49:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:10.269 16:49:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:10.269 16:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.269 16:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:10.269 16:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:10.269 16:49:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FODIUd06xn == \/\t\m\p\/\t\m\p\.\F\O\D\I\U\d\0\6\x\n ]] 00:44:10.269 16:49:58 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:10.269 16:49:58 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:10.269 16:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.269 16:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:10.269 16:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.526 16:49:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.qTD17WOH9P == \/\t\m\p\/\t\m\p\.\q\T\D\1\7\W\O\H\9\P ]] 00:44:10.526 16:49:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:10.526 16:49:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:10.526 16:49:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:10.526 16:49:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.526 16:49:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:10.526 16:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.784 16:49:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:10.784 16:49:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:10.784 16:49:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:10.784 16:49:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:10.784 16:49:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.784 16:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.784 16:49:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:11.042 16:49:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:11.042 16:49:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:11.042 16:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:11.042 [2024-12-16 16:49:59.619321] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:11.300 nvme0n1 00:44:11.300 16:49:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.300 16:49:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:11.300 16:49:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:11.300 16:49:59 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:11.300 16:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.559 16:50:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:11.559 16:50:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:11.817 Running I/O for 1 seconds... 00:44:12.751 19095.00 IOPS, 74.59 MiB/s 00:44:12.751 Latency(us) 00:44:12.751 [2024-12-16T15:50:01.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:12.751 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:12.751 nvme0n1 : 1.00 19142.97 74.78 0.00 0.00 6674.16 2746.27 15728.64 00:44:12.751 [2024-12-16T15:50:01.360Z] =================================================================================================================== 00:44:12.751 [2024-12-16T15:50:01.360Z] Total : 19142.97 74.78 0.00 0.00 6674.16 2746.27 15728.64 00:44:12.751 { 00:44:12.751 "results": [ 00:44:12.751 { 00:44:12.751 "job": "nvme0n1", 00:44:12.751 "core_mask": "0x2", 00:44:12.751 "workload": "randrw", 00:44:12.751 "percentage": 50, 00:44:12.751 "status": "finished", 00:44:12.751 "queue_depth": 128, 00:44:12.751 "io_size": 4096, 00:44:12.751 "runtime": 1.004233, 00:44:12.751 "iops": 19142.967817229666, 00:44:12.751 "mibps": 74.77721803605338, 00:44:12.751 "io_failed": 0, 00:44:12.751 "io_timeout": 0, 00:44:12.751 "avg_latency_us": 6674.1589901512, 00:44:12.751 "min_latency_us": 2746.270476190476, 00:44:12.751 "max_latency_us": 15728.64 00:44:12.751 } 00:44:12.751 ], 00:44:12.751 "core_count": 1 00:44:12.751 } 00:44:12.751 16:50:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:12.752 16:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:13.010 16:50:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:13.010 16:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:13.010 16:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:13.010 16:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:13.010 16:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:13.010 16:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.267 16:50:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:13.267 16:50:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:13.267 16:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:13.267 16:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:13.267 16:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:13.267 16:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:13.267 16:50:01 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.267 16:50:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:13.267 16:50:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:13.267 16:50:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:13.267 16:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:13.525 [2024-12-16 16:50:01.996615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:13.525 [2024-12-16 16:50:01.996919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242a6a0 (107): Transport endpoint is not connected 00:44:13.525 [2024-12-16 16:50:01.997913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242a6a0 (9): Bad file descriptor 00:44:13.525 [2024-12-16 16:50:01.998914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:13.525 [2024-12-16 16:50:01.998924] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:13.525 [2024-12-16 16:50:01.998931] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:13.525 [2024-12-16 16:50:01.998940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:13.525 request: 00:44:13.525 { 00:44:13.525 "name": "nvme0", 00:44:13.525 "trtype": "tcp", 00:44:13.525 "traddr": "127.0.0.1", 00:44:13.525 "adrfam": "ipv4", 00:44:13.525 "trsvcid": "4420", 00:44:13.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:13.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:13.525 "prchk_reftag": false, 00:44:13.525 "prchk_guard": false, 00:44:13.525 "hdgst": false, 00:44:13.525 "ddgst": false, 00:44:13.525 "psk": "key1", 00:44:13.525 "allow_unrecognized_csi": false, 00:44:13.525 "method": "bdev_nvme_attach_controller", 00:44:13.525 "req_id": 1 00:44:13.525 } 00:44:13.525 Got JSON-RPC error response 00:44:13.525 response: 00:44:13.525 { 00:44:13.525 "code": -5, 00:44:13.525 "message": "Input/output error" 00:44:13.525 } 00:44:13.525 16:50:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:13.525 16:50:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:13.525 16:50:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:13.525 16:50:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:13.525 16:50:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:13.525 16:50:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:13.525 16:50:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:13.525 16:50:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:13.525 16:50:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:13.525 16:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.796 16:50:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:13.797 16:50:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:13.797 16:50:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:13.797 16:50:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:13.797 16:50:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:13.797 16:50:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:13.797 16:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.797 16:50:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:13.797 16:50:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:13.797 16:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:14.054 16:50:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:14.054 16:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:14.312 16:50:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:14.312 16:50:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:14.312 16:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:14.569 16:50:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:14.569 16:50:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FODIUd06xn 00:44:14.569 16:50:03 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:14.569 16:50:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:14.569 16:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:14.825 [2024-12-16 16:50:03.185124] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FODIUd06xn': 0100660 00:44:14.825 [2024-12-16 16:50:03.185149] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:14.825 request: 00:44:14.825 { 00:44:14.825 "name": "key0", 00:44:14.825 "path": "/tmp/tmp.FODIUd06xn", 00:44:14.825 "method": "keyring_file_add_key", 00:44:14.825 "req_id": 1 00:44:14.825 } 00:44:14.825 Got JSON-RPC error response 00:44:14.826 response: 00:44:14.826 { 00:44:14.826 "code": -1, 00:44:14.826 "message": "Operation not permitted" 00:44:14.826 } 00:44:14.826 16:50:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:14.826 16:50:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:14.826 16:50:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:14.826 16:50:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:14.826 16:50:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FODIUd06xn 00:44:14.826 16:50:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:14.826 16:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FODIUd06xn 00:44:14.826 16:50:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FODIUd06xn 00:44:14.826 16:50:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:14.826 16:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:14.826 16:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:14.826 16:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:14.826 16:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:14.826 16:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:15.083 16:50:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:15.083 16:50:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:15.083 16:50:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:15.083 16:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:15.342 [2024-12-16 16:50:03.778690] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FODIUd06xn': No such file or directory 00:44:15.342 [2024-12-16 16:50:03.778713] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:15.342 [2024-12-16 16:50:03.778730] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:15.342 [2024-12-16 16:50:03.778737] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:15.342 [2024-12-16 16:50:03.778744] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:15.342 [2024-12-16 16:50:03.778750] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:15.342 request: 00:44:15.342 { 00:44:15.342 "name": "nvme0", 00:44:15.342 "trtype": "tcp", 00:44:15.342 "traddr": "127.0.0.1", 00:44:15.342 "adrfam": "ipv4", 00:44:15.342 "trsvcid": "4420", 00:44:15.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:15.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:15.342 "prchk_reftag": false, 00:44:15.342 "prchk_guard": false, 00:44:15.342 "hdgst": false, 00:44:15.342 "ddgst": false, 00:44:15.342 "psk": "key0", 00:44:15.342 "allow_unrecognized_csi": false, 00:44:15.342 "method": "bdev_nvme_attach_controller", 00:44:15.342 "req_id": 1 00:44:15.342 } 00:44:15.342 Got JSON-RPC error response 00:44:15.342 response: 00:44:15.342 { 00:44:15.342 "code": -19, 00:44:15.342 "message": "No such device" 00:44:15.342 } 00:44:15.342 16:50:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:15.342 16:50:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:15.342 16:50:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:15.342 16:50:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:15.342 16:50:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:15.342 16:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:15.601 16:50:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oCoRvuuqXL 00:44:15.601 16:50:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:15.601 16:50:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:15.601 16:50:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:15.601 16:50:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:15.601 16:50:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:15.601 16:50:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:15.601 16:50:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:15.601 16:50:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oCoRvuuqXL 00:44:15.601 16:50:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oCoRvuuqXL 00:44:15.601 16:50:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.oCoRvuuqXL 00:44:15.601 16:50:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oCoRvuuqXL 00:44:15.601 16:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oCoRvuuqXL 00:44:15.860 16:50:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:15.860 16:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:15.860 nvme0n1 00:44:16.119 16:50:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:16.119 16:50:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:16.119 16:50:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:16.119 16:50:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.119 16:50:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:16.119 16:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:16.119 16:50:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:16.119 16:50:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:16.119 16:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:16.377 16:50:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:16.377 16:50:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:16.377 16:50:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.377 16:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:16.377 16:50:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:16.641 16:50:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:16.641 16:50:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:16.641 16:50:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:16.641 16:50:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:16.641 16:50:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.641 16:50:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:16.641 16:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:16.902 16:50:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:16.902 16:50:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:16.902 16:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:16.902 16:50:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:16.902 16:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:16.902 16:50:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:17.160 16:50:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:17.160 16:50:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oCoRvuuqXL 00:44:17.160 16:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oCoRvuuqXL 00:44:17.418 16:50:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qTD17WOH9P 00:44:17.418 16:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qTD17WOH9P 00:44:17.676 16:50:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:17.676 16:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:17.676 nvme0n1 00:44:17.934 16:50:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:17.934 16:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:18.258 16:50:06 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:18.258 "subsystems": [ 00:44:18.258 { 00:44:18.258 "subsystem": "keyring", 00:44:18.258 "config": [ 00:44:18.258 { 00:44:18.258 "method": "keyring_file_add_key", 00:44:18.258 "params": { 00:44:18.258 "name": "key0", 00:44:18.258 "path": "/tmp/tmp.oCoRvuuqXL" 00:44:18.258 } 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "method": "keyring_file_add_key", 00:44:18.258 "params": { 00:44:18.258 "name": "key1", 00:44:18.258 "path": "/tmp/tmp.qTD17WOH9P" 00:44:18.258 } 00:44:18.258 } 00:44:18.258 ] 
00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "subsystem": "iobuf", 00:44:18.258 "config": [ 00:44:18.258 { 00:44:18.258 "method": "iobuf_set_options", 00:44:18.258 "params": { 00:44:18.258 "small_pool_count": 8192, 00:44:18.258 "large_pool_count": 1024, 00:44:18.258 "small_bufsize": 8192, 00:44:18.258 "large_bufsize": 135168, 00:44:18.258 "enable_numa": false 00:44:18.258 } 00:44:18.258 } 00:44:18.258 ] 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "subsystem": "sock", 00:44:18.258 "config": [ 00:44:18.258 { 00:44:18.258 "method": "sock_set_default_impl", 00:44:18.258 "params": { 00:44:18.258 "impl_name": "posix" 00:44:18.258 } 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "method": "sock_impl_set_options", 00:44:18.258 "params": { 00:44:18.258 "impl_name": "ssl", 00:44:18.258 "recv_buf_size": 4096, 00:44:18.258 "send_buf_size": 4096, 00:44:18.258 "enable_recv_pipe": true, 00:44:18.258 "enable_quickack": false, 00:44:18.258 "enable_placement_id": 0, 00:44:18.258 "enable_zerocopy_send_server": true, 00:44:18.258 "enable_zerocopy_send_client": false, 00:44:18.258 "zerocopy_threshold": 0, 00:44:18.258 "tls_version": 0, 00:44:18.258 "enable_ktls": false 00:44:18.258 } 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "method": "sock_impl_set_options", 00:44:18.258 "params": { 00:44:18.258 "impl_name": "posix", 00:44:18.258 "recv_buf_size": 2097152, 00:44:18.258 "send_buf_size": 2097152, 00:44:18.258 "enable_recv_pipe": true, 00:44:18.258 "enable_quickack": false, 00:44:18.258 "enable_placement_id": 0, 00:44:18.258 "enable_zerocopy_send_server": true, 00:44:18.258 "enable_zerocopy_send_client": false, 00:44:18.258 "zerocopy_threshold": 0, 00:44:18.258 "tls_version": 0, 00:44:18.258 "enable_ktls": false 00:44:18.258 } 00:44:18.258 } 00:44:18.258 ] 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "subsystem": "vmd", 00:44:18.258 "config": [] 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "subsystem": "accel", 00:44:18.258 "config": [ 00:44:18.258 { 00:44:18.258 "method": "accel_set_options", 00:44:18.258 "params": { 00:44:18.258 "small_cache_size": 128, 00:44:18.258 "large_cache_size": 16, 00:44:18.258 "task_count": 2048, 00:44:18.258 "sequence_count": 2048, 00:44:18.258 "buf_count": 2048 00:44:18.258 } 00:44:18.258 } 00:44:18.258 ] 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "subsystem": "bdev", 00:44:18.258 "config": [ 00:44:18.258 { 00:44:18.258 "method": "bdev_set_options", 00:44:18.258 "params": { 00:44:18.258 "bdev_io_pool_size": 65535, 00:44:18.258 "bdev_io_cache_size": 256, 00:44:18.258 "bdev_auto_examine": true, 00:44:18.258 "iobuf_small_cache_size": 128, 00:44:18.258 "iobuf_large_cache_size": 16 00:44:18.258 } 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "method": "bdev_raid_set_options", 00:44:18.258 "params": { 00:44:18.258 "process_window_size_kb": 1024, 00:44:18.258 "process_max_bandwidth_mb_sec": 0 00:44:18.258 } 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "method": "bdev_iscsi_set_options", 00:44:18.258 "params": { 00:44:18.258 "timeout_sec": 30 00:44:18.258 } 00:44:18.258 }, 00:44:18.258 { 00:44:18.258 "method": "bdev_nvme_set_options", 00:44:18.258 "params": { 00:44:18.258 "action_on_timeout": "none", 00:44:18.258 "timeout_us": 0, 00:44:18.258 "timeout_admin_us": 0, 00:44:18.258 "keep_alive_timeout_ms": 10000, 00:44:18.258 "arbitration_burst": 0, 00:44:18.258 "low_priority_weight": 0, 00:44:18.258 "medium_priority_weight": 0, 00:44:18.258 "high_priority_weight": 0, 00:44:18.258 "nvme_adminq_poll_period_us": 10000, 00:44:18.258 "nvme_ioq_poll_period_us": 0, 00:44:18.258 "io_queue_requests": 512, 
00:44:18.258 "delay_cmd_submit": true, 00:44:18.258 "transport_retry_count": 4, 00:44:18.258 "bdev_retry_count": 3, 00:44:18.258 "transport_ack_timeout": 0, 00:44:18.258 "ctrlr_loss_timeout_sec": 0, 00:44:18.258 "reconnect_delay_sec": 0, 00:44:18.258 "fast_io_fail_timeout_sec": 0, 00:44:18.258 "disable_auto_failback": false, 00:44:18.258 "generate_uuids": false, 00:44:18.258 "transport_tos": 0, 00:44:18.258 "nvme_error_stat": false, 00:44:18.258 "rdma_srq_size": 0, 00:44:18.258 "io_path_stat": false, 00:44:18.258 "allow_accel_sequence": false, 00:44:18.258 "rdma_max_cq_size": 0, 00:44:18.258 "rdma_cm_event_timeout_ms": 0, 00:44:18.258 "dhchap_digests": [ 00:44:18.258 "sha256", 00:44:18.258 "sha384", 00:44:18.258 "sha512" 00:44:18.258 ], 00:44:18.258 "dhchap_dhgroups": [ 00:44:18.258 "null", 00:44:18.258 "ffdhe2048", 00:44:18.259 "ffdhe3072", 00:44:18.259 "ffdhe4096", 00:44:18.259 "ffdhe6144", 00:44:18.259 "ffdhe8192" 00:44:18.259 ], 00:44:18.259 "rdma_umr_per_io": false 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "bdev_nvme_attach_controller", 00:44:18.259 "params": { 00:44:18.259 "name": "nvme0", 00:44:18.259 "trtype": "TCP", 00:44:18.259 "adrfam": "IPv4", 00:44:18.259 "traddr": "127.0.0.1", 00:44:18.259 "trsvcid": "4420", 00:44:18.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:18.259 "prchk_reftag": false, 00:44:18.259 "prchk_guard": false, 00:44:18.259 "ctrlr_loss_timeout_sec": 0, 00:44:18.259 "reconnect_delay_sec": 0, 00:44:18.259 "fast_io_fail_timeout_sec": 0, 00:44:18.259 "psk": "key0", 00:44:18.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:18.259 "hdgst": false, 00:44:18.259 "ddgst": false, 00:44:18.259 "multipath": "multipath" 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "bdev_nvme_set_hotplug", 00:44:18.259 "params": { 00:44:18.259 "period_us": 100000, 00:44:18.259 "enable": false 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "bdev_wait_for_examine" 00:44:18.259 } 00:44:18.259 ] 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "subsystem": "nbd", 00:44:18.259 "config": [] 00:44:18.259 } 00:44:18.259 ] 00:44:18.259 }' 00:44:18.259 16:50:06 keyring_file -- keyring/file.sh@115 -- # killprocess 1325403 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325403 ']' 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325403 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325403 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325403' 00:44:18.259 killing process with pid 1325403 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@973 -- # kill 1325403 00:44:18.259 Received shutdown signal, test time was about 1.000000 seconds 00:44:18.259 00:44:18.259 Latency(us) 00:44:18.259 [2024-12-16T15:50:06.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:18.259 [2024-12-16T15:50:06.868Z] =================================================================================================================== 00:44:18.259 [2024-12-16T15:50:06.868Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@978 -- # wait 1325403 00:44:18.259 16:50:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=1326879 00:44:18.259 16:50:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1326879 /var/tmp/bperf.sock 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1326879 ']' 00:44:18.259 16:50:06 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:18.259 16:50:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:18.259 16:50:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:18.259 "subsystems": [ 00:44:18.259 { 00:44:18.259 "subsystem": "keyring", 00:44:18.259 "config": [ 00:44:18.259 { 00:44:18.259 "method": "keyring_file_add_key", 00:44:18.259 "params": { 00:44:18.259 "name": "key0", 00:44:18.259 "path": "/tmp/tmp.oCoRvuuqXL" 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "keyring_file_add_key", 00:44:18.259 "params": { 00:44:18.259 "name": "key1", 00:44:18.259 "path": "/tmp/tmp.qTD17WOH9P" 00:44:18.259 } 00:44:18.259 } 00:44:18.259 ] 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "subsystem": "iobuf", 00:44:18.259 "config": [ 00:44:18.259 { 00:44:18.259 "method": "iobuf_set_options", 00:44:18.259 "params": { 00:44:18.259 "small_pool_count": 8192, 00:44:18.259 "large_pool_count": 1024, 00:44:18.259 "small_bufsize": 8192, 00:44:18.259 "large_bufsize": 135168, 00:44:18.259 "enable_numa": false 00:44:18.259 } 00:44:18.259 } 00:44:18.259 ] 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "subsystem": "sock", 00:44:18.259 "config": [ 00:44:18.259 { 00:44:18.259 "method": "sock_set_default_impl", 00:44:18.259 "params": { 00:44:18.259 "impl_name": "posix" 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "sock_impl_set_options", 00:44:18.259 "params": { 00:44:18.259 "impl_name": "ssl", 00:44:18.259 "recv_buf_size": 4096, 00:44:18.259 "send_buf_size": 4096, 00:44:18.259 "enable_recv_pipe": true, 00:44:18.259 "enable_quickack": false, 00:44:18.259 "enable_placement_id": 0, 00:44:18.259 "enable_zerocopy_send_server": true, 00:44:18.259 "enable_zerocopy_send_client": false, 00:44:18.259 "zerocopy_threshold": 0, 00:44:18.259 "tls_version": 0, 00:44:18.259 "enable_ktls": false 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "sock_impl_set_options", 00:44:18.259 "params": { 00:44:18.259 "impl_name": "posix", 00:44:18.259 "recv_buf_size": 2097152, 00:44:18.259 "send_buf_size": 2097152, 00:44:18.259 "enable_recv_pipe": true, 00:44:18.259 "enable_quickack": false, 00:44:18.259 "enable_placement_id": 0, 00:44:18.259 "enable_zerocopy_send_server": true, 00:44:18.259 "enable_zerocopy_send_client": false, 00:44:18.259 "zerocopy_threshold": 0, 00:44:18.259 "tls_version": 0, 00:44:18.259 "enable_ktls": false 00:44:18.259 } 00:44:18.259 } 00:44:18.259 ] 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "subsystem": "vmd", 00:44:18.259 "config": [] 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "subsystem": "accel", 00:44:18.259 "config": [ 00:44:18.259 { 00:44:18.259 "method": "accel_set_options", 00:44:18.259 "params": { 00:44:18.259 "small_cache_size": 128, 00:44:18.259 "large_cache_size": 16, 00:44:18.259 "task_count": 2048, 00:44:18.259 "sequence_count": 2048, 00:44:18.259 "buf_count": 2048 00:44:18.259 } 00:44:18.259 } 00:44:18.259 ] 00:44:18.259 }, 
00:44:18.259 { 00:44:18.259 "subsystem": "bdev", 00:44:18.259 "config": [ 00:44:18.259 { 00:44:18.259 "method": "bdev_set_options", 00:44:18.259 "params": { 00:44:18.259 "bdev_io_pool_size": 65535, 00:44:18.259 "bdev_io_cache_size": 256, 00:44:18.259 "bdev_auto_examine": true, 00:44:18.259 "iobuf_small_cache_size": 128, 00:44:18.259 "iobuf_large_cache_size": 16 00:44:18.259 } 00:44:18.259 }, 00:44:18.259 { 00:44:18.259 "method": "bdev_raid_set_options", 00:44:18.259 "params": { 00:44:18.260 "process_window_size_kb": 1024, 00:44:18.260 "process_max_bandwidth_mb_sec": 0 00:44:18.260 } 00:44:18.260 }, 00:44:18.260 { 00:44:18.260 "method": "bdev_iscsi_set_options", 00:44:18.260 "params": { 00:44:18.260 "timeout_sec": 30 00:44:18.260 } 00:44:18.260 }, 00:44:18.260 { 00:44:18.260 "method": "bdev_nvme_set_options", 00:44:18.260 "params": { 00:44:18.260 "action_on_timeout": "none", 00:44:18.260 "timeout_us": 0, 00:44:18.260 "timeout_admin_us": 0, 00:44:18.260 "keep_alive_timeout_ms": 10000, 00:44:18.260 "arbitration_burst": 0, 00:44:18.260 "low_priority_weight": 0, 00:44:18.260 "medium_priority_weight": 0, 00:44:18.260 "high_priority_weight": 0, 00:44:18.260 "nvme_adminq_poll_period_us": 10000, 00:44:18.260 "nvme_ioq_poll_period_us": 0, 00:44:18.260 "io_queue_requests": 512, 00:44:18.260 "delay_cmd_submit": true, 00:44:18.260 "transport_retry_count": 4, 00:44:18.260 "bdev_retry_count": 3, 00:44:18.260 "transport_ack_timeout": 0, 00:44:18.260 "ctrlr_loss_timeout_sec": 0, 00:44:18.260 "reconnect_delay_sec": 0, 00:44:18.260 "fast_io_fail_timeout_sec": 0, 00:44:18.260 "disable_auto_failback": false, 00:44:18.260 "generate_uuids": false, 00:44:18.260 "transport_tos": 0, 00:44:18.260 "nvme_error_stat": false, 00:44:18.260 "rdma_srq_size": 0, 00:44:18.260 16:50:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:18.260 "io_path_stat": false, 00:44:18.260 "allow_accel_sequence": false, 00:44:18.260 "rdma_max_cq_size": 0, 00:44:18.260 "rdma_cm_event_timeout_ms": 0, 00:44:18.260 "dhchap_digests": [ 00:44:18.260 "sha256", 00:44:18.260 "sha384", 00:44:18.260 "sha512" 00:44:18.260 ], 00:44:18.260 "dhchap_dhgroups": [ 00:44:18.260 "null", 00:44:18.260 "ffdhe2048", 00:44:18.260 "ffdhe3072", 00:44:18.260 "ffdhe4096", 00:44:18.260 "ffdhe6144", 00:44:18.260 "ffdhe8192" 00:44:18.260 ], 00:44:18.260 "rdma_umr_per_io": false 00:44:18.260 } 00:44:18.260 }, 00:44:18.260 { 00:44:18.260 "method": "bdev_nvme_attach_controller", 00:44:18.260 "params": { 00:44:18.260 "name": "nvme0", 00:44:18.260 "trtype": "TCP", 00:44:18.260 "adrfam": "IPv4", 00:44:18.260 "traddr": "127.0.0.1", 00:44:18.260 "trsvcid": "4420", 00:44:18.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:18.260 "prchk_reftag": false, 00:44:18.260 "prchk_guard": false, 00:44:18.260 "ctrlr_loss_timeout_sec": 0, 00:44:18.260 "reconnect_delay_sec": 0, 00:44:18.260 "fast_io_fail_timeout_sec": 0, 00:44:18.260 "psk": "key0", 00:44:18.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:18.260 "hdgst": false, 00:44:18.260 "ddgst": false, 00:44:18.260 "multipath": "multipath" 00:44:18.260 } 00:44:18.260 }, 00:44:18.260 { 00:44:18.260 "method": "bdev_nvme_set_hotplug", 00:44:18.260 "params": { 00:44:18.260 "period_us": 100000, 00:44:18.260 "enable": false 00:44:18.260 } 00:44:18.260 }, 00:44:18.260 { 00:44:18.260 "method": "bdev_wait_for_examine" 00:44:18.260 } 00:44:18.260 ] 00:44:18.260 }, 00:44:18.260 { 00:44:18.260 "subsystem": "nbd", 00:44:18.260 "config": [] 00:44:18.260 } 00:44:18.260 ] 00:44:18.260 }' 00:44:18.260 16:50:06 keyring_file 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:18.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:18.260 16:50:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:18.260 16:50:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:18.260 [2024-12-16 16:50:06.795580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:18.260 [2024-12-16 16:50:06.795633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326879 ] 00:44:18.566 [2024-12-16 16:50:06.870489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:18.566 [2024-12-16 16:50:06.891482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:18.566 [2024-12-16 16:50:07.047178] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:19.131 16:50:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:19.131 16:50:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:19.131 16:50:07 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:19.131 16:50:07 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:19.131 16:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:19.389 16:50:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:19.389 16:50:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:19.389 16:50:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:19.389 16:50:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:19.389 16:50:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:19.389 16:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:19.389 16:50:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:19.647 16:50:08 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:19.647 16:50:08 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:19.647 16:50:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:19.647 16:50:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:19.647 16:50:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:19.647 16:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:19.647 16:50:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:19.647 16:50:08 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:19.647 16:50:08 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:19.647 16:50:08 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:19.647 16:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:19.905 16:50:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:19.905 16:50:08 
keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:19.905 16:50:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oCoRvuuqXL /tmp/tmp.qTD17WOH9P 00:44:19.905 16:50:08 keyring_file -- keyring/file.sh@20 -- # killprocess 1326879 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1326879 ']' 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1326879 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326879 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326879' 00:44:19.905 killing process with pid 1326879 00:44:19.905 16:50:08 keyring_file -- common/autotest_common.sh@973 -- # kill 1326879 00:44:19.905 Received shutdown signal, test time was about 1.000000 seconds 00:44:19.905 00:44:19.905 Latency(us) 00:44:19.905 [2024-12-16T15:50:08.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.906 [2024-12-16T15:50:08.515Z] =================================================================================================================== 00:44:19.906 [2024-12-16T15:50:08.515Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:19.906 16:50:08 keyring_file -- common/autotest_common.sh@978 -- # wait 1326879 00:44:20.164 16:50:08 keyring_file -- keyring/file.sh@21 -- # killprocess 1325393 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325393 ']' 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325393 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325393 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325393' 00:44:20.164 killing process with pid 1325393 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@973 -- # kill 1325393 00:44:20.164 16:50:08 keyring_file -- common/autotest_common.sh@978 -- # wait 1325393 00:44:20.424 00:44:20.424 real 0m11.670s 00:44:20.424 user 0m29.115s 00:44:20.424 sys 0m2.636s 00:44:20.424 16:50:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:20.424 16:50:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:20.424 ************************************ 00:44:20.424 END TEST keyring_file 00:44:20.424 ************************************ 00:44:20.424 16:50:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:20.424 16:50:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:20.424 16:50:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:20.424 
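Note: the config-replay step traced above (file.sh@113/@116) distills to the following sketch. The first bperf's JSON config, including both keyring_file_add_key calls, is snapshotted over the RPC socket and a second bdevperf is booted straight from it; the <(...) process substitution is what appears as -c /dev/fd/63 in the trace. Paths and flags are copied from this workspace's invocation; bperf_cmd's body is reconstructed from the keyring/common.sh@8 lines.

bperf_cmd() {
    # every keyring/common.sh@8 entry above is this: rpc.py against the bperf socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

config=$(bperf_cmd save_config)    # JSON snapshot of keyring, sock, bdev, ... subsystems

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo "$config")           # new instance re-adds key0/key1 and re-attaches nvme0
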
16:50:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:20.424 16:50:09 -- common/autotest_common.sh@10 -- # set +x 00:44:20.684 ************************************ 00:44:20.684 START TEST keyring_linux 00:44:20.684 ************************************ 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:20.684 Joined session keyring: 283933139 00:44:20.684 * Looking for test storage... 00:44:20.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:20.684 16:50:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:20.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:20.684 --rc genhtml_branch_coverage=1 00:44:20.684 --rc genhtml_function_coverage=1 00:44:20.684 --rc genhtml_legend=1 00:44:20.684 --rc geninfo_all_blocks=1 00:44:20.684 --rc geninfo_unexecuted_blocks=1 00:44:20.684 00:44:20.684 ' 00:44:20.684 16:50:09 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:20.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:20.684 --rc genhtml_branch_coverage=1 00:44:20.684 --rc genhtml_function_coverage=1 00:44:20.684 --rc genhtml_legend=1 00:44:20.685 --rc geninfo_all_blocks=1 00:44:20.685 --rc geninfo_unexecuted_blocks=1 00:44:20.685 00:44:20.685 ' 00:44:20.685 16:50:09 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:20.685 --rc genhtml_branch_coverage=1 00:44:20.685 --rc genhtml_function_coverage=1 00:44:20.685 --rc genhtml_legend=1 00:44:20.685 --rc geninfo_all_blocks=1 00:44:20.685 --rc geninfo_unexecuted_blocks=1 00:44:20.685 00:44:20.685 ' 00:44:20.685 16:50:09 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:20.685 --rc genhtml_branch_coverage=1 00:44:20.685 --rc genhtml_function_coverage=1 00:44:20.685 --rc genhtml_legend=1 00:44:20.685 --rc geninfo_all_blocks=1 00:44:20.685 --rc geninfo_unexecuted_blocks=1 00:44:20.685 00:44:20.685 ' 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:20.685 16:50:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:20.685 16:50:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:20.685 16:50:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:20.685 16:50:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:20.685 16:50:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:20.685 16:50:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:20.685 16:50:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:20.685 16:50:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:20.685 16:50:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:20.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:20.685 16:50:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:20.685 16:50:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:20.685 16:50:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:20.944 /tmp/:spdk-test:key0 00:44:20.944 16:50:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:20.944 
16:50:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:20.944 16:50:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:20.944 16:50:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:20.944 16:50:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:20.944 16:50:09 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:20.944 16:50:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:20.944 16:50:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:20.944 16:50:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:20.944 /tmp/:spdk-test:key1 00:44:20.944 16:50:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1327424 00:44:20.944 16:50:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1327424 00:44:20.944 16:50:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:20.944 16:50:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327424 ']' 00:44:20.944 16:50:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:20.944 16:50:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:20.944 16:50:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:20.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:20.944 16:50:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:20.944 16:50:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:20.944 [2024-12-16 16:50:09.416347] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
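Note: the format_interchange_psk/format_key steps traced above (nvmf/common.sh@730-@733, ending in the bare "python -" entries) compute the NVMe TLS PSK interchange string. A self-contained sketch of that computation, assuming the interchange payload is the key bytes followed by a little-endian CRC32 of those bytes, base64-encoded (the hex string is used as raw ASCII, not hex-decoded, which is consistent with the base64 text seen in the keyctl lines below):

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # configured key string, used as raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended before encoding
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), b64))
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
# expected: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# (the same string the test writes to /tmp/:spdk-test:key0 and feeds to keyctl)
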
00:44:20.944 [2024-12-16 16:50:09.416393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327424 ] 00:44:20.944 [2024-12-16 16:50:09.491764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:20.944 [2024-12-16 16:50:09.514237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:21.204 16:50:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:21.204 [2024-12-16 16:50:09.715510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:21.204 null0 00:44:21.204 [2024-12-16 16:50:09.747558] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:21.204 [2024-12-16 16:50:09.747826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.204 16:50:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:21.204 461282022 00:44:21.204 16:50:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:21.204 667798006 00:44:21.204 16:50:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1327432 00:44:21.204 16:50:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1327432 /var/tmp/bperf.sock 00:44:21.204 16:50:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327432 ']' 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:21.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:21.204 16:50:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:21.462 [2024-12-16 16:50:09.822824] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
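Note: the keyctl lines above (linux.sh@66/@67) and the search/print/unlink lines later in the run form one session-keyring round trip. In isolation it looks like the sketch below; keyctl is from keyutils, and serials such as 461282022 are per-session values, not stable across runs.

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # link the PSK into the session keyring; prints the serial
keyctl search @s user :spdk-test:key0             # look the key up by description -> same serial (linux.sh@16)
keyctl print "$sn"                                # payload comes back verbatim; linux.sh@27 compares it to $psk
keyctl unlink "$sn"                               # cleanup; reports "1 links removed" as at the end of the test
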
00:44:21.462 [2024-12-16 16:50:09.822865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327432 ] 00:44:21.462 [2024-12-16 16:50:09.899479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:21.462 [2024-12-16 16:50:09.921795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:21.462 16:50:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:21.462 16:50:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:21.462 16:50:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:21.463 16:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:21.721 16:50:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:21.721 16:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:21.979 16:50:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:21.979 16:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:22.237 [2024-12-16 16:50:10.594654] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:22.237 nvme0n1 00:44:22.237 16:50:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:22.237 16:50:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:22.237 16:50:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:22.237 16:50:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:22.237 16:50:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:22.237 16:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:22.496 16:50:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:22.496 16:50:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:22.496 16:50:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:22.496 16:50:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:22.496 16:50:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:22.496 16:50:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:22.496 16:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:22.496 16:50:11 keyring_linux -- keyring/linux.sh@25 -- # sn=461282022 00:44:22.496 16:50:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:22.496 16:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:22.496 16:50:11 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 461282022 == \4\6\1\2\8\2\0\2\2 ]] 00:44:22.496 16:50:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 461282022 00:44:22.496 16:50:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:22.496 16:50:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:22.754 Running I/O for 1 seconds... 00:44:23.689 21760.00 IOPS, 85.00 MiB/s 00:44:23.689 Latency(us) 00:44:23.689 [2024-12-16T15:50:12.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:23.689 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:23.689 nvme0n1 : 1.01 21757.97 84.99 0.00 0.00 5863.53 1880.26 7084.13 00:44:23.689 [2024-12-16T15:50:12.298Z] =================================================================================================================== 00:44:23.689 [2024-12-16T15:50:12.298Z] Total : 21757.97 84.99 0.00 0.00 5863.53 1880.26 7084.13 00:44:23.689 { 00:44:23.689 "results": [ 00:44:23.689 { 00:44:23.689 "job": "nvme0n1", 00:44:23.689 "core_mask": "0x2", 00:44:23.689 "workload": "randread", 00:44:23.689 "status": "finished", 00:44:23.689 "queue_depth": 128, 00:44:23.689 "io_size": 4096, 00:44:23.689 "runtime": 1.005976, 00:44:23.689 "iops": 21757.974345312414, 00:44:23.689 "mibps": 84.99208728637662, 00:44:23.689 "io_failed": 0, 00:44:23.689 "io_timeout": 0, 00:44:23.689 "avg_latency_us": 5863.530292397661, 00:44:23.689 "min_latency_us": 1880.2590476190476, 00:44:23.689 "max_latency_us": 7084.129523809524 00:44:23.689 } 00:44:23.689 ], 00:44:23.689 "core_count": 1 00:44:23.689 } 00:44:23.689 16:50:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:23.689 16:50:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:23.947 16:50:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:23.947 16:50:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:23.947 16:50:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:23.947 16:50:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:23.947 16:50:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:23.947 16:50:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:24.206 16:50:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:24.206 16:50:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:24.206 16:50:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:24.206 16:50:12 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:24.206 16:50:12 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:24.206 16:50:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:24.206 [2024-12-16 16:50:12.806503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:24.206 [2024-12-16 16:50:12.807374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a753d0 (107): Transport endpoint is not connected 00:44:24.206 [2024-12-16 16:50:12.808369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a753d0 (9): Bad file descriptor 00:44:24.206 [2024-12-16 16:50:12.809371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:24.206 [2024-12-16 16:50:12.809390] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:24.206 [2024-12-16 16:50:12.809402] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:24.206 [2024-12-16 16:50:12.809413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
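The attach/verify/teardown sequence traced above reduces to a handful of keyctl operations against the session keyring plus one RPC. A minimal standalone sketch, reusing the NVMe-spec sample PSK and the socket/key names visible in this trace (run from the spdk checkout; everything else here is a placeholder):

# Store the TLS PSK in the session keyring under the name SPDK will look up.
keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# Resolve its serial number and confirm the payload, as check_keys does above.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"

# Hand the key to the bdevperf app by name via --psk.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# Teardown mirrors the cleanup above: detach, then unlink the key by serial.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
keyctl unlink "$sn" @s

At this point in the log the NOT-wrapped attach with :spdk-test:key1 is expected to fail, and the JSON-RPC response that follows reports code -5 (Input/output error).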
00:44:24.206 request: 00:44:24.206 { 00:44:24.206 "name": "nvme0", 00:44:24.206 "trtype": "tcp", 00:44:24.206 "traddr": "127.0.0.1", 00:44:24.206 "adrfam": "ipv4", 00:44:24.206 "trsvcid": "4420", 00:44:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:24.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:24.206 "prchk_reftag": false, 00:44:24.206 "prchk_guard": false, 00:44:24.206 "hdgst": false, 00:44:24.206 "ddgst": false, 00:44:24.206 "psk": ":spdk-test:key1", 00:44:24.206 "allow_unrecognized_csi": false, 00:44:24.206 "method": "bdev_nvme_attach_controller", 00:44:24.206 "req_id": 1 00:44:24.206 } 00:44:24.206 Got JSON-RPC error response 00:44:24.206 response: 00:44:24.206 { 00:44:24.206 "code": -5, 00:44:24.206 "message": "Input/output error" 00:44:24.206 } 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@33 -- # sn=461282022 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 461282022 00:44:24.465 1 links removed 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@33 -- # sn=667798006 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 667798006 00:44:24.465 1 links removed 00:44:24.465 16:50:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1327432 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327432 ']' 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327432 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327432 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327432' 00:44:24.465 killing process with pid 1327432 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327432 00:44:24.465 Received shutdown signal, test time was about 1.000000 seconds 00:44:24.465 00:44:24.465 
Latency(us) 00:44:24.465 [2024-12-16T15:50:13.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:24.465 [2024-12-16T15:50:13.074Z] =================================================================================================================== 00:44:24.465 [2024-12-16T15:50:13.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:24.465 16:50:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327432 00:44:24.465 16:50:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1327424 00:44:24.465 16:50:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327424 ']' 00:44:24.465 16:50:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327424 00:44:24.465 16:50:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:24.465 16:50:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:24.465 16:50:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327424 00:44:24.724 16:50:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:24.724 16:50:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:24.724 16:50:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327424' 00:44:24.724 killing process with pid 1327424 00:44:24.724 16:50:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327424 00:44:24.724 16:50:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327424 00:44:24.983 00:44:24.983 real 0m4.336s 00:44:24.983 user 0m8.230s 00:44:24.983 sys 0m1.435s 00:44:24.983 16:50:13 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:24.983 16:50:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:24.983 ************************************ 00:44:24.983 END TEST keyring_linux 00:44:24.983 ************************************ 00:44:24.983 16:50:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:24.983 16:50:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:24.983 16:50:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:24.983 16:50:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:24.983 16:50:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:24.983 16:50:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:24.983 16:50:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:24.983 16:50:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:24.983 16:50:13 -- common/autotest_common.sh@10 -- # set +x 00:44:24.983 16:50:13 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:24.984 16:50:13 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:24.984 16:50:13 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:24.984 16:50:13 -- common/autotest_common.sh@10 -- # set +x 00:44:30.262 INFO: APP EXITING 
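The two killprocess calls above follow the same guarded pattern, visible step by step in the xtrace: validate the pid, check liveness, refuse to signal a bare sudo wrapper, then kill and wait. A rough reconstruction from those trace lines only (the actual helper in autotest_common.sh is more elaborate, e.g. in how it handles the sudo case):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # The real helper special-cases process_name = sudo; simplified here.
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # wait only works for children
}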
00:44:30.262 INFO: killing all VMs 00:44:30.262 INFO: killing vhost app 00:44:30.262 INFO: EXIT DONE 00:44:32.809 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:33.068 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:33.068 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:33.328 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:33.328 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:33.328 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:33.328 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:36.625 Cleaning 00:44:36.625 Removing: /var/run/dpdk/spdk0/config 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:36.625 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:36.625 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:36.625 Removing: /var/run/dpdk/spdk1/config 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:36.625 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:36.625 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:36.625 Removing: /var/run/dpdk/spdk2/config 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:36.625 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:36.625 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:36.625 Removing: /var/run/dpdk/spdk3/config 00:44:36.625 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:36.625 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:36.625 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:36.625 Removing: /var/run/dpdk/spdk4/config 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:36.625 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:36.625 Removing: /var/run/dpdk/spdk4/hugepage_info 00:44:36.625 Removing: /dev/shm/bdev_svc_trace.1 00:44:36.625 Removing: /dev/shm/nvmf_trace.0 00:44:36.625 Removing: /dev/shm/spdk_tgt_trace.pid771473 00:44:36.625 Removing: /var/run/dpdk/spdk0 00:44:36.625 Removing: /var/run/dpdk/spdk1 00:44:36.625 Removing: /var/run/dpdk/spdk2 00:44:36.625 Removing: /var/run/dpdk/spdk3 00:44:36.625 Removing: /var/run/dpdk/spdk4 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1010228 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1014647 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1016211 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1017991 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1018217 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1018237 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1018464 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1018946 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1020676 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1021474 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1021959 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1024016 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1024486 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1024982 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1029149 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1034442 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1034443 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1034444 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1038285 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1042371 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1047239 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1082696 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1086536 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1092617 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1093801 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1095179 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1096479 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1100869 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1105135 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1109083 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1116330 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1116379 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1120954 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1121177 00:44:36.625 Removing: 
/var/run/dpdk/spdk_pid1121395 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1121814 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1121847 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1123570 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1125282 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1126876 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1128573 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1130180 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1131740 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1137682 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1138240 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1139940 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1140957 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1146559 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1149190 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1154316 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1159539 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1168638 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1175488 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1175490 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1193695 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1194159 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1194767 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1195291 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1196011 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1196471 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1196931 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1197598 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1201580 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1201818 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1208260 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1208524 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1213683 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1217840 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1227335 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1227807 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1231957 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1232204 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1236365 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1241885 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1244397 00:44:36.625 Removing: /var/run/dpdk/spdk_pid1254653 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1263371 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1264930 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1265823 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1281444 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1285183 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1287913 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1295569 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1295577 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1300649 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1302953 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1304765 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1305888 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1307812 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1308846 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1317413 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1317859 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1318308 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1320641 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1321190 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1321642 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1325393 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1325403 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1326879 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1327424 00:44:36.626 Removing: /var/run/dpdk/spdk_pid1327432 00:44:36.626 Removing: 
/var/run/dpdk/spdk_pid769371 00:44:36.626 Removing: /var/run/dpdk/spdk_pid770417 00:44:36.626 Removing: /var/run/dpdk/spdk_pid771473 00:44:36.626 Removing: /var/run/dpdk/spdk_pid772096 00:44:36.626 Removing: /var/run/dpdk/spdk_pid773018 00:44:36.886 Removing: /var/run/dpdk/spdk_pid773148 00:44:36.886 Removing: /var/run/dpdk/spdk_pid774207 00:44:36.886 Removing: /var/run/dpdk/spdk_pid774213 00:44:36.886 Removing: /var/run/dpdk/spdk_pid774559 00:44:36.886 Removing: /var/run/dpdk/spdk_pid776042 00:44:36.886 Removing: /var/run/dpdk/spdk_pid777331 00:44:36.886 Removing: /var/run/dpdk/spdk_pid777783 00:44:36.886 Removing: /var/run/dpdk/spdk_pid777959 00:44:36.886 Removing: /var/run/dpdk/spdk_pid778157 00:44:36.886 Removing: /var/run/dpdk/spdk_pid778441 00:44:36.886 Removing: /var/run/dpdk/spdk_pid778704 00:44:36.886 Removing: /var/run/dpdk/spdk_pid778950 00:44:36.886 Removing: /var/run/dpdk/spdk_pid779231 00:44:36.886 Removing: /var/run/dpdk/spdk_pid779951 00:44:36.886 Removing: /var/run/dpdk/spdk_pid782881 00:44:36.886 Removing: /var/run/dpdk/spdk_pid783131 00:44:36.886 Removing: /var/run/dpdk/spdk_pid783379 00:44:36.886 Removing: /var/run/dpdk/spdk_pid783388 00:44:36.886 Removing: /var/run/dpdk/spdk_pid783878 00:44:36.886 Removing: /var/run/dpdk/spdk_pid783890 00:44:36.886 Removing: /var/run/dpdk/spdk_pid784364 00:44:36.886 Removing: /var/run/dpdk/spdk_pid784371 00:44:36.886 Removing: /var/run/dpdk/spdk_pid784647 00:44:36.886 Removing: /var/run/dpdk/spdk_pid784843 00:44:36.886 Removing: /var/run/dpdk/spdk_pid784966 00:44:36.886 Removing: /var/run/dpdk/spdk_pid785101 00:44:36.886 Removing: /var/run/dpdk/spdk_pid785559 00:44:36.886 Removing: /var/run/dpdk/spdk_pid785733 00:44:36.886 Removing: /var/run/dpdk/spdk_pid786064 00:44:36.886 Removing: /var/run/dpdk/spdk_pid789970 00:44:36.886 Removing: /var/run/dpdk/spdk_pid794550 00:44:36.886 Removing: /var/run/dpdk/spdk_pid804625 00:44:36.886 Removing: /var/run/dpdk/spdk_pid805304 00:44:36.886 Removing: /var/run/dpdk/spdk_pid809499 00:44:36.886 Removing: /var/run/dpdk/spdk_pid809743 00:44:36.886 Removing: /var/run/dpdk/spdk_pid813931 00:44:36.886 Removing: /var/run/dpdk/spdk_pid819736 00:44:36.886 Removing: /var/run/dpdk/spdk_pid822434 00:44:36.886 Removing: /var/run/dpdk/spdk_pid832529 00:44:36.886 Removing: /var/run/dpdk/spdk_pid841930 00:44:36.886 Removing: /var/run/dpdk/spdk_pid843714 00:44:36.886 Removing: /var/run/dpdk/spdk_pid844610 00:44:36.886 Removing: /var/run/dpdk/spdk_pid861157 00:44:36.886 Removing: /var/run/dpdk/spdk_pid865153 00:44:36.886 Removing: /var/run/dpdk/spdk_pid947024 00:44:36.886 Removing: /var/run/dpdk/spdk_pid952148 00:44:36.886 Removing: /var/run/dpdk/spdk_pid957946 00:44:36.886 Removing: /var/run/dpdk/spdk_pid964293 00:44:36.886 Removing: /var/run/dpdk/spdk_pid964341 00:44:36.886 Removing: /var/run/dpdk/spdk_pid965210 00:44:36.886 Removing: /var/run/dpdk/spdk_pid966387 00:44:36.886 Removing: /var/run/dpdk/spdk_pid967276 00:44:36.886 Removing: /var/run/dpdk/spdk_pid967879 00:44:36.886 Removing: /var/run/dpdk/spdk_pid967952 00:44:36.886 Removing: /var/run/dpdk/spdk_pid968176 00:44:36.886 Removing: /var/run/dpdk/spdk_pid968192 00:44:36.886 Removing: /var/run/dpdk/spdk_pid968301 00:44:36.886 Removing: /var/run/dpdk/spdk_pid969088 00:44:36.886 Removing: /var/run/dpdk/spdk_pid969972 00:44:36.886 Removing: /var/run/dpdk/spdk_pid970862 00:44:36.886 Removing: /var/run/dpdk/spdk_pid971344 00:44:36.886 Removing: /var/run/dpdk/spdk_pid971516 00:44:36.886 Removing: /var/run/dpdk/spdk_pid971752 00:44:36.886 Removing: 
/var/run/dpdk/spdk_pid972747 00:44:36.886 Removing: /var/run/dpdk/spdk_pid973710 00:44:36.886 Removing: /var/run/dpdk/spdk_pid981813 00:44:36.886 Clean 00:44:37.145 16:50:25 -- common/autotest_common.sh@1453 -- # return 0 00:44:37.145 16:50:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:37.145 16:50:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:37.146 16:50:25 -- common/autotest_common.sh@10 -- # set +x 00:44:37.146 16:50:25 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:37.146 16:50:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:37.146 16:50:25 -- common/autotest_common.sh@10 -- # set +x 00:44:37.146 16:50:25 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:37.146 16:50:25 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:37.146 16:50:25 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:37.146 16:50:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:37.146 16:50:25 -- spdk/autotest.sh@398 -- # hostname 00:44:37.146 16:50:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:37.405 geninfo: WARNING: invalid characters removed from testname! 00:44:59.361 16:50:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:00.742 16:50:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:02.650 16:50:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:04.557 16:50:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:06.464 16:50:54 -- spdk/autotest.sh@406 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:08.369 16:50:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:10.297 16:50:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:10.297 16:50:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:10.297 16:50:58 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:10.297 16:50:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:10.297 16:50:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:10.297 16:50:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:10.297 + [[ -n 674694 ]] 00:45:10.297 + sudo kill 674694 00:45:10.310 [Pipeline] } 00:45:10.324 [Pipeline] // stage 00:45:10.329 [Pipeline] } 00:45:10.344 [Pipeline] // timeout 00:45:10.349 [Pipeline] } 00:45:10.362 [Pipeline] // catchError 00:45:10.367 [Pipeline] } 00:45:10.381 [Pipeline] // wrap 00:45:10.387 [Pipeline] } 00:45:10.399 [Pipeline] // catchError 00:45:10.408 [Pipeline] stage 00:45:10.410 [Pipeline] { (Epilogue) 00:45:10.422 [Pipeline] catchError 00:45:10.424 [Pipeline] { 00:45:10.437 [Pipeline] echo 00:45:10.438 Cleanup processes 00:45:10.444 [Pipeline] sh 00:45:10.738 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:10.738 1339118 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:10.752 [Pipeline] sh 00:45:11.039 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:11.039 ++ grep -v 'sudo pgrep' 00:45:11.039 ++ awk '{print $1}' 00:45:11.039 + sudo kill -9 00:45:11.039 + true 00:45:11.051 [Pipeline] sh 00:45:11.336 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:23.582 [Pipeline] sh 00:45:23.868 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:23.868 Artifacts sizes are good 00:45:23.883 [Pipeline] archiveArtifacts 00:45:23.890 Archiving artifacts 00:45:24.047 [Pipeline] sh 00:45:24.334 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:24.348 [Pipeline] cleanWs 00:45:24.358 [WS-CLEANUP] Deleting project workspace... 00:45:24.358 [WS-CLEANUP] Deferred wipeout is used... 00:45:24.365 [WS-CLEANUP] done 00:45:24.367 [Pipeline] } 00:45:24.384 [Pipeline] // catchError 00:45:24.396 [Pipeline] sh 00:45:24.691 + logger -p user.info -t JENKINS-CI 00:45:24.734 [Pipeline] } 00:45:24.747 [Pipeline] // stage 00:45:24.752 [Pipeline] } 00:45:24.766 [Pipeline] // node 00:45:24.771 [Pipeline] End of Pipeline 00:45:24.826 Finished: SUCCESS
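For reference, the coverage epilogue traced in autotest.sh@398-408 condenses to one capture, one merge, and a series of pattern removals. A compact equivalent, with the long --rc flag blocks elided and $spdk_dir standing in for the workspace path (the '/usr/*' pass in the real run also adds --ignore-errors unused):

# Capture test coverage, merge with the baseline, then strip external code.
lcov -q -c --no-external -d "$spdk_dir" -t "$(hostname)" -o cov_test.info
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r cov_total.info "$pat" -o cov_total.info
done
rm -f cov_base.info cov_test.info   # keep only the filtered total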